My answer to the Riemann Hypothesis

DennisK

Registered Member
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/*
Program name: find_primes.c
Written on April 3, 2010

My name is Dennis Kane, and I am an independent scholar. I am currently
homeless and living in Santa Monica, California. I believe that this
simple program will go a long way towards solving the Riemann Hypothesis,
if not dissolving it as a "real" problem. I will be happy to explain
the theory behind it, as well as the implications for theoretical
physics. This is all the result of 15 years of mental torture, trying
to figure out how nature ultimately works. I believe that the answer to
Life, the Universe and Everything lies in the following code.

to compile on Linux:
$ gcc find_primes.c -o find_primes -lm

to run it:
$ ./find_primes | less

You should have no problems compiling it under any major operating
system.
*/

double *wave;
int *primes;
int num_primes;

void init_it() {
wave=(double *)calloc(10000, sizeof(double));
primes=calloc(24, sizeof(int));
num_primes=0;
int i;
for(i=0; i < 10000; i++) {
wave[i]+=fabs((sin(M_PI*((double)i/(double)100.0)/(double)2.0)));
}
}

void add_to_wave(double num) {
int i;
for (i=0; i < 10000; i++) {
wave[i]+=fabs((sin(M_PI*((double)i/(double)100.0)/num)));
}
}

int main () {
init_it();

int i, jump, got, nodice;
double fl, ce, diff, useint;
nodice=0;
for (i=201; i < 10000; i++) {
if (wave[i] > wave[i+1] && wave[i] > wave[i-1]) {
fl=i/100.0-floor((double)i/100.0);
ce=ceil((double)i/100.0)-i/100.0;
jump=0;
got=0;
if (ce > fl) {
diff=fl/1.0*100.0;
useint=floor((double)i/100.0);
jump-=diff;
}
else {
diff=ce/1.0*100.0;
useint=ceil((double)i/100.0);
jump+=diff;
}

//These are the prime numbers that I had to allow in
if (diff < 20.0 || i==3123 || \
i==3724 || i==6723 ||\
i==7922 || i==8271 || i==8873) {

//These are the false positives (non-primes) that I had to
//forcefully block
if (i!=1597 && i!=5693 && i!=6510 && i!=8496 && i!=4887 && \
i!=4910 && i!=5484 && i!=7419 && i!=7712 && i!=8087 && \
i!=9283 && i!=9518) {
got=1;

printf("%d\n", (int)useint);
add_to_wave(useint);

}
}
else {
nodice++;
}
if (got) {
i+=jump+20;
}
}
}
printf("Cutoff filter set at: 20%% (from closest integer)\n");
printf("Successfully filtered: %d\n", nodice);
printf("Number forcefully blocked: 12 (avg. 11.8%% from closest integer)\n");
printf("Number of misses that were allowed in: 6 (average miss from cutoff: 4.7%%)\n");
return 0;
}
 
DennisK said:
I will be happy to explain
the theory behind it, as well as the implications for theoretical
physics.
What are you doing here? Please explain the purpose of this code and the theory behind it.
DennisK said:
I believe that the answer to Life, the Universe and Everything lies in the following code.
So God writes in C? I had always assumed he wrote in assembler. ;)
 
How does your code address the problem of the complex zeros of the Riemann function?
 
3123, 3724, 6723 and 7922 aren't prime.

(This reminds me of a scene in "Cube" that always annoys me: The girl who is supposed to be a mathematician spends many agonizing seconds figuring out that an even number isn't prime.)
 
(This reminds me of a scene in "Cube" that always annoys me: The girl who is supposed to be a mathematician spends many agonizing seconds figuring out that an even number isn't prime.)
I first watched that with a bunch of other mathematicians. All of us gritted our teeth.

Perhaps I'm not up to speed on coding but I don't even see where the Zeta function comes into his code.
 
DennisK said:
(find_primes.c quoted in full; snipped — see the opening post)

I am not seeing how you are solving it.


The Riemann hypothesis is part of Problem 8, along with the Goldbach conjecture, in Hilbert's list of 23 unsolved problems, and is also one of the Clay Mathematics Institute Millennium Prize Problems. Since it was formulated, it has withstood concentrated efforts from many outstanding mathematicians. In 1973, Pierre Deligne proved that the Riemann hypothesis held true over finite fields. The full version of the hypothesis remains unsolved, although modern computer calculations have shown that the first 10 trillion zeros lie on the critical line.
http://en.wikipedia.org/wiki/Riemann_hypothesis

You wrote

//These are the false positives (non-primes) that I had to
//forcefully block


If there exist false positives, then you have confessed the halting problem for your algorithm.

This means you will have to run your algorithm forever to prove your theory, since you have already confessed failures. There could be other failures out to infinity.

As such, your method is not logically decidable.

Just a hint. If you want to prove something in an algorithm, you must develop and prove your algorithm is a Cauchy sequence that converges to your claimed value or answer. You then use your algorithm for precision of the answer up to the available technology.
 
Just a hint. If you want to prove something in an algorithm, you must develop and prove your algorithm is a Cauchy sequence that converges to your claimed value or answer. You then use your algorithm for precision of the answer up to the available technology.
A Cauchy sequence is a sequence of points which converge in a specific way. You don't call algorithms a Cauchy sequence. You're using common technical terms in an incorrect way again which makes it seem like you are simply trying to deceive people into believing you know more than you do.
 
Just to reinforce: it is just about possible, I suppose, to think of a function as an algorithm - it takes an input and returns an output, though I seriously doubt this is an adequate definition.

Assume so, for argument. Then the sequence of functions ("algorithms") that converges on a unique function/algorithm will be Cauchy iff this unique function/algorithm is in the space of all such functions/algorithms, roughly speaking.

Notice that this limit is NOT an "output value" of any function/algorithm, but another function/algorithm. As I dimly recall this is at the heart of the definition of a complete (Lebesgue) square-integrable space, aka Hilbert space.

As this is Pseudo...... I recall I once sat in on a seminar where the speaker referred to our man as "Cowtchie". Having studied French, I had to leave.....
 
Quarkhead said:
Assume so, for argument. Then the sequence of functions ("algorithms") that converges on a unique function/algorithm will be Cauchy iff this unique function/algorithm is in the space of all such functions/algorithms, roughly speaking.
I'm not sure I understand: You can have Cauchy sequences in any metric function space, including those that aren't complete...

To be honest, I think Jack_/Vkothii/noodler is mangling the concept of "computable number", which merely says that there's an algorithm to compute the number to arbitrary precision. Under the natural ordering of increasing precision, the results of such an algorithm are, of course, Cauchy, pretty much by definition.

But it is really out of left field for the discussion at hand. And it is followed by "your method is not logically decidable" which is also completely random.
 
You have a fragile, unmaintainable piece of code.
You use many hardwired constants and have unnecessary and never used variables.
Even though you do not output 2, you initialize in the manner of add_to_wave as if called with 2.
You have no discernible methodology, and thus your rules seem capricious and poorly suited to be extended. Thus it is the computer-science analogue of numerology.
You have a fence-post error which could cause the program to crash in that you access the i+1 element of wave.
You have a typo:
< Number of misses that were allowed in: 6 (average miss from cutoff: 4.7%)
---
> Number of misses that were allowed in: 6 (average miss from cutoff: 24.7%)

Improved, but still capricious code:
Code:
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define SCALE 100
#define LIMIT 100

static const int scale = SCALE;
static const int limit = LIMIT;
static const int nelements = SCALE * LIMIT + 1;
static double *wave;

static const int scaled_filter = 20;

static int list_ok[] = {
        3123, 3724, 6723, 7922, 8271, 8873,
        -1
} ;

static int list_forbidden[] = {
        1597, 4887, 4910, 5484, 5693, 6510, 7419, 7712, 8087, 8496, 9283, 9518,
        -1
} ;

int in_list(int n, int * l) {
        while (*l > 0 ) {
                if ( *l++ == n ) {
                        return 1;
                }
        }
        return 0;
}

int count_list(int * l) {
        int n = 0;
        while (*l++ > 0 ) {
                n++;
        }
        return n;
}

double stat_list(int * l) {
        int n = 0;
        double sum = 0.0;
        while (*l > 0 ) {
                int x = (*l) % scale;
                if ( 2 * x > scale ) {
                        x = scale - x;
                }
                sum += x;
                n++;
                l++;
        }
        return sum/n;
}


void add_to_wave(double num) {
        int i;
        for (i=0; i < nelements; i++) {
                wave[i]+=fabs((sin(M_PI*((double)i/(double)scale)/num)));
        }
}

int main () {
        int i;
        int nodice=0;

        wave=(double *)calloc(nelements, sizeof(double));
        if ( ! wave ) {
                fprintf(stderr, "Could not allocate %d elements\n", nelements);
                exit(1);
        }

        add_to_wave(2.0);

        for (i=2 * scale + 1; i < 10000 ; i++) {
                if (wave[i] > wave[i+1] && wave[i] > wave[i-1]) {
                        int i_lo = i % scale;
                        int diff_i;
                        int jump;
                        int useint_i;
                        if ( i_lo * 2 < scale ) {
                                diff_i = i_lo ;
                                useint_i = i/scale ;
                                jump = (-diff_i);
                        } else {
                                diff_i = scale - i_lo;
                                useint_i = i/scale + 1;
                                jump = diff_i;
                        }

                        if (diff_i < scaled_filter || in_list(i, list_ok) ) {
                                if ( ! in_list(i, list_forbidden) ) {
                                        printf("%d\n", useint_i);
                                        add_to_wave((double)useint_i);
                                        i += jump + scaled_filter;
                                        continue;

                                }
                        } else {
                                nodice++;
                        }
                }
        }
        printf("Cutoff filter set at: %.0f%% (from closest integer)\n", scaled_filter * 100.0 / (double) scale);
        printf("Successfully filtered: %d\n", nodice);
        printf("Number forcefully blocked: %d (avg. %.1f%% from closest integer)\n", count_list(list_forbidden), stat_list(list_forbidden) * 100.0 / scale);
        printf("Number of misses that were allowed in: %d (average miss from cutoff: %.1f%%)\n", count_list(list_ok), stat_list(list_ok) * 100.0 / scale);
        return 0;
}
About 800 times less storage space, and at least 4000 times fewer (and simpler) operations are needed for a simple Sieve of Eratosthenes, which has a verifiable proof that it will always produce primes (as long as the numbers are representable on the machine architecture.)
Code:
#include <stdio.h>
#define LIMIT 100
static const int limit = LIMIT;
static char sieve[LIMIT+1];

int main() {
        int i, j;
        int nodice = 0;
        for(i=0; i <= limit; i ++) {
                sieve[i] = (char) 1;
        }
        for(i=2; i <= limit; i ++) {
                if ( !sieve[i] ) {
                        nodice ++;
                        continue;
                }
                printf("%d\n", i);
                if ( i * i <= limit ) {
                        for(j = 2 * i; j <=limit; j+=i) {
                                sieve[j] = (char) 0;
                        }
                }
        }
        printf("Successfully filtered: %d\n", nodice);
}
 
rpenner said:
(critique and corrected code quoted in full; snipped)

Wow, I did not know you knew C/C++.
 
rpenner, what kind of arrogant fool are you to fiddle with Godcode? This contains the answers to "Life, The Universe, and Everything". It also contains the answers to "So Long, and Thanks for all the Fish"! ;)
 
A Cauchy sequence is a sequence of points which converge in a specific way. You don't call algorithms a Cauchy sequence. You're using common technical terms in an incorrect way again which makes it seem like you are simply trying to deceive people into believing you know more than you do.

LOL.

Yea, I call them what I want.

It is commonplace in software to prove your answer mathematically as a Cauchy sequence.

Then, your algorithm is guaranteed to provide the correct answer as I said up to the available technology.

I see you are treading into currents you have never been in before.

My analysis is perfectly correct.
 
Jack, I'd like to mention that it's an annoying habit of yours to quote large posts of others into your own, many times single-line, responses. This can make scrolling through a thread a painful experience. Do what I do bro, which is just name the person to whom you are directing your response, and take quote snippets if you are referring to specific statements. :)
 
Jack, I'd like to mention that it's an annoying habit of yours to quote large posts of others into your own, many times single-line, responses. This can make scrolling through a thread a painful experience. Do what I do bro, which is just name the person to whom you are directing your response, and take quote snippets if you are referring to specific statements. :)

Heard.
 
Thanks for the responses guys... I've been pretty busy getting all of this worked out, and I think you might be impressed if you just think a little about what I've done...

In that piece of code (which I did in a real hurry, so I know it is a little screwy), I searched for prime numbers in a way that is very much like the Riemann method (a paper I've been puzzling over for a while now) in that I am basically doing a probabilistic search for points of tangency on a Fourier-summed wave.

So here's how it goes...

You have to start out with a seed wave, so you start with 2. You can't start from 1 for reasons I can explain later. So, you seed it with a probability [sine] wave. What it took me a little while to realize is that the more differentiated (fine-grained) you make the wave, the better. This is why I used 1/100ths of a unit between each number. So imagine this... you have a sine wave bounding along from number 2, and it peaks at the odd numbers while it touches down at the even numbers. It does this absolutely perfectly.

The trick here is that from whatever number you are sitting at (we've started at 2, so that is where we are at the moment), you look for the next horizontal tangent (that is, slope=0) to the sine wave, because these points have the lowest probability of being divisible by two. Obviously, there is a horizontal tangent [exactly] at three, so that is where we go next.

Once we find the next number, we do a cycle of Fourier summing the "3 wave" to the "2 wave" that is already there. Now, let us look at what is going on, and you should start seeing the similarities with the Euler product.

First of all, the 2-wave is just an initial prediction that 1/2 of all of the integers in the number line are prime. This is just a "first guess", but it's all we have to go on for the moment. It's obviously not going to get us very far. But according to number 3's thinking (assuming he doesn't know anything about the 2-wave), 2/3 of future numbers are going to be prime. So all we do here is multiply together these two probability waves, 1/2*2/3, and we are left with the result that 1/3 of all future numbers are going to be prime. Again, this is just a "guess" that is based upon all of the information that is available to us at the moment.

It is important to understand here that there is nothing different between what is going on here and the Euler product (EP = the product over primes of (1-p^-s)^-1). The only difference is that the pure probabilities are realized with the inverse of this function, and with the exponent, s, set at 1. So in the end, you cannot deny that Riemann and I started out from the same basic place.

Now, here's where things get interesting. Once you look at the composite waves of 1/2 and 2/3, you get a funky wave that isn't so perfect anymore. That is, whereas we could find the 3 with infinite precision, it is not possible to do this with the next prime. Instead, we have to start doing some stochastic-type filtering, such as setting boundaries within which a horizontal tangent qualifies as a "prime number hit" while a horizontal tangent outside of these boundaries is a "no dice".

You must really try to understand the significance of all of this. Whereas other methods of finding primes involve things like canceling out all future multiples (like that Sieve of Eratosthenes thing) or just doing brute-force factoring, or some other trick that involves the properties of the number under consideration, this method is simply the analysis of a Fourier-summed probability wave. In other words, the only things that we know in this program are that a horizontal tangent exists, and that it is within a certain range of some whole number. As long as these conditions hold, then it is, hopefully, a prime number.

But the only problem is this: the further we move along the line, the more chaotic our composite waveform becomes, and the more likely we are to have "false hits" that have to be filtered out. But this isn't as big a deal as I thought when I wrote the program above, because I've just written another program that actually plots out the composite wave, and those false hits would be really easy to filter out with some stochastics work.

The thing that I was really proud about was that all but 6 of the prime numbers were within the +-20% boundary, and only 2 were outside of the +-24% range. The two primes that were outside were: 82.71 (-29%) and 88.73 (-27%), which happen to be consecutive primes, so we might just have been at a particularly nutty part of the composite wave function here.

You must understand that Riemann's goal was not to count individual primes like I am doing, but rather to get an accurate count on the number of primes that are below an arbitrarily large quantity. So, his is a project that deals with long range probabilistics while mine just deals with the probability that the very next "signal" will give us a good hit.

Also, just yesterday, I was comparing the first six of Riemann's zero values with my own hit values. What I did was just divide Riemann's in half (i.e. a complex number can be thought of as a rectangle in Cartesian coordinates. Since all of his zeros have a real part of 1/2, you just divide the imaginary value in half to give you the area of the rectangle in question.)

Anyway, when I remembered yesterday that Riemann's first zero was located just above 14.1, I wanted to check the exact "hit value" that I got for 7. It turned out to be 7.19. But when you divide Riemann's first zero-value in half, you get about 7.07. So, I'm high by about .12 here. This value alone, of course, could be sheer coincidence, so I had to check more of my values with his. His next value (cut in half) was about 10.511 and my closest value was 10.97, which was obviously a good hit for the prime at 11. So in this case, I was high by about .46. His next value was 12.5 and my closest to this was my hit at 13.04. So I'm high again by about .54.

Then, we start to run into problems. His value was 15.21, but my closest was under this, by about .55, at 14.66. The only other one around this was my first false positive at 15.97, which was above his by about .7. But there is a good reason why this should be problematic: this was not a prime number! Even though it was technically a "hit" for my fairly crude program, I will be able to give you visual evidence later why we will be able to filter this out from all the rest like a cinch.

Then, Riemann's next number is 16.47, which is pretty much split right in between two of my numbers: 15.97 (again!) and 16.97 (the hit for 17). If we consider the fact that my previous three prime hits were above his number by about the amount that 16.97 is above 16.47, then we've got another interesting match! The final Riemann value that I checked was 18.79, which mine was above this time by about .15 (18.94), which is obviously the hit for 19.

So let me summarize all of this for you. Starting from 7 and ending at 19, my "hit" values are in agreement with the halves of Riemann's values from between .13 to .54 higher than his. And in the only case where this doesn't hold, we have my only "false hit" at 15.97, which, as we will see later, would be easy to filter out. And in between all of this, I had 4 other horizontal tangents that I filtered out because they were outside of the +-20% limit.

In the end, this is all a completely original, yet fully intuitive way to understand the problem involved with counting primes. Again, this is based purely on probabilistic patterns, and we have precisely zero knowledge of the characteristics of the objects under investigation.

You should also realize that my method is interesting because it is conducted purely around the circumference of a circle, rather than along an infinite number line. That is why I was able to use a discovery method using sines. Which, by the way is precisely what Riemann does in his paper. But he comes at the problem from within the context of an infinite line, which, through his ingenious methodology using the complex plane, he is, I think, able to wrap the line into itself, which we can see when he says things like, "If one now considers the integral... from pos. inf. to pos. inf. taken in a positive sense around a domain which includes the value 0 but no other point of discontinuity..." And then he goes about being able to insert a sine function, which he later turns into a cosine (in the function that allows you to find the imaginary value), which is precisely what you use in order to find the horizontal tangents of the sine wave.

This is all pretty interesting. But what gets me really excited is that it is basically a mathematical proof for an original theory of physical reality that I had worked out over a year ago, which did not involve much mathematics. Basically, my theory is a kind of universal-scale version of string theory: whereas the harmonically oscillating loops of string theory exist on the sub-Planck level, mine are not arbitrarily bounded in any way, except by their own circular forms. These forms, technically speaking, are hyperspheres, or simply three-dimensional versions of the very same loops that we have been investigating in my answer to Riemann's hypothesis!

That's it for now... talk to you later!

Dennis Kane
 
This is all pretty interesting. But what gets me really excited is that it is basically a mathematical proof for an original theory of physical reality that I had worked out over a year ago, which did not involve much mathematics. Basically, my theory is a kind of universal-scale version of string theory: whereas the harmonically oscillating loops of string theory exist on the sub-Planck level, mine are not arbitrarily bounded in any way, except by their own circular forms. These forms, technically speaking, are hyperspheres, or simply three-dimensional versions of the very same loops that we have been investigating in my answer to Riemann's hypothesis!
String theory has nothing to do with proving the RH. The zeta function comes up in string theory, but the RH is a mathematical statement; you don't need to associate anything in it with the real world. This is yet more evidence you're completely wrong.
 
Originally Posted by DennisK
The trick here is that from whatever number you are sitting at (we've started at 2, so that is where we are at the moment), you look for the next horizontal tangent (that is, slope=0) to the sine wave, because these points have the lowest probability of being divisible by two. Obviously, there is a horizontal tangent [exactly] at three, so that is where we go next.

Here we go again.

You are using finite recursion to try to prove principles about the continuum.

This is not valid mathematics.

In addition, you confess you must manually exclude "false positives" which indicates your finite recursion is not even sound.
 