Sylwester's 'Everlasting theory'

To eliminate the maximum number of luminal neutrinos from the CERN-Gran Sasso direction, we should apply a strong magnetic field to curve the trajectories of the muons and charged pions, i.e. WE SHOULD SCATTER THEM.

To increase the number density of the superluminal neutrinos, the pulses of neutrinos, and so the pulses of protons as well, CANNOT BE TOO SHORT. Moreover, the intensity of the protons ejected towards the graphite neutrino-production target SHOULD BE AS HIGH AS POSSIBLE.

We know that in the ICARUS experiment the above-listed conditions were worse than in the original OPERA experiment, so there appeared the wrong conclusion that neutrinos cannot be superluminal particles.

I must emphasize once more that the Everlasting Theory PROVES that the time-distance between the neutrino and photon fronts observed on Earth for the supernova SN 1987A follows from the superluminality of a great number of the neutrinos emitted during the explosion of the supernova.
 
AlphaNumeric, you compromised yourself as well. You wrote that my simple formula for pseudorapidity density would be inconsistent with experimental data (you know, it is the 1.93 for sqrt(s_NN) = 2.76 TeV). I predicted the theoretical result for 5.02 TeV as well, i.e. I proved that my Everlasting Theory is not made false by changing it.
Thanks for making a straw man by lying about what I've been saying. I didn't say anything about your formula for pseudo-rapidity. When I say your work is inconsistent with data I refer to the example you've previously given where you admit your prediction is outside of experimental bounds but which you then go on to make excuses for. I also refer to the fact you disagree with notions/models central to mainstream physics, namely quarks etc, and yet claim you can accurately derive the value of the strong coupling constant. Numerous times, over several years, I've explained why your statements about how QCD and the SM are all deeply flawed are inconsistent with your claims about the strong coupling constant. You have never been able to address that criticism since the only way to do it is to show your work is able to precisely model the raw data coming out of particle accelerators. Things like the value for $$\alpha_{strong}$$ are computed from raw data using the Standard Model. If the SM is wrong then the value of $$\alpha_{strong}$$ currently in data tables is wrong. So you're self-contradicting.

So once more: My simple formula for RELATIVE PSEUDORAPIDITY DENSITY:

X = sqrt(sqrt(E[TeV]/0.2)) = (E[TeV]/0.2)^0.25

IS CONSISTENT WITH ALL EXPERIMENTAL DATA.
No, it isn't, and I can say that without even having to know any experimental data. How can I do that? By noting that the units of your equation are such that X has units of energy to the 1/4 power, i.e. $$E^{\frac{1}{4}}$$. Pseudo-rapidity is dimensionless; it doesn't have units of energy or mass or time or length or any combination of them. Your expression does, so the value will change when you change units: if you worked in BTUs rather than Joules you'd get a different value for X, but X should be independent of your choice of units.

This reduces your claim to numerology, nothing more.
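To make the unit-dependence point concrete, here is a minimal sketch (Python), using the formula exactly as quoted above and the 2.76 and 5.02 TeV energies discussed in this thread. If the 0.2 is not explicitly a reference energy in the same units as E, the numerical value of X changes with the unit chosen, which is the argument being made here.

```python
# Minimal sketch of the dimensional-analysis argument above, using the formula
# as written: X = (E / 0.2)**0.25 with E read in TeV. Treating E as a bare
# dimensionful number (rather than a ratio of energies) makes X unit-dependent.

def x_pseudorapidity(E, reference=0.2):
    """Sylwester's quoted expression X = (E / reference)**0.25."""
    return (E / reference) ** 0.25

for E_tev in (2.76, 5.02):
    x_tev = x_pseudorapidity(E_tev)            # E and the 0.2 both read in TeV (~1.93, ~2.24)
    x_gev = x_pseudorapidity(E_tev * 1000.0)   # E converted to GeV, 0.2 left unchanged
    print(f"E = {E_tev} TeV: X(TeV units) = {x_tev:.3f}, X(GeV, 0.2 unchanged) = {x_gev:.3f}")
```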

It follows from the atom-like structure of baryons which, contrary to the quark model, leads to the masses of nucleons.
And if said quark model is wrong then the values for things like the pseudo-rapidity are likely to be wrong too because the value for pseudo-rapidity you claim to explain is determined from raw data using said quark models.

Since you've been unable to grasp this issue with your claims for several years I'll give a hypothetical example. Alice has a model for gravitational and kinetic energies. Gravitational potential energy for a constant gravitational field is mgh. Newtonian kinetic energy is $$\frac{1}{2}mv^{2}$$. So Alice can use energy conservation to work out a value for g by dropping a ball from a known height and measuring its velocity when it hits the ground, giving $$g = \frac{v^{2}}{2h}$$. This is an 'experimental value' for g, and Alice had to use a model to determine it because she cannot observe it directly; she can only measure the speed and drop height directly. However, suppose someone, Bob, comes along and says "Newton is wrong. Kinetic energy is actually $$mv^{2}$$!". Then Bob would say that $$g = \frac{v^{2}}{h}$$.

Both of them have the same 'raw data', ie the values for h and v, but because they use different models they get different 'experimental values' for g. This is EXACTLY the issue you have. The value of $$\alpha_{strong}$$ is not measured directly by experiments; instead it is extracted from raw data, like particle velocities and electromagnetic field variations, using a model; the Standard Model to be precise. If you think the SM is nonsense then we cannot trust any of the 'experimental values' output by the model processing the raw data, including $$\alpha_{strong}$$. You simultaneously denounce the SM yet claim to have very good agreement with a result dependent upon the SM.
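A minimal numerical sketch of the Alice/Bob example (Python, with entirely hypothetical values for the drop height and impact speed) shows how the same raw data yields two different 'experimental' values of g under the two models:

```python
# Hypothetical raw data shared by both models.
h = 5.0   # drop height in metres (hypothetical)
v = 9.9   # impact speed in m/s (hypothetical)

# Alice: Newtonian kinetic energy (1/2) m v^2, so m g h = (1/2) m v^2.
g_alice = v**2 / (2 * h)

# Bob: claims kinetic energy is m v^2, so m g h = m v^2.
g_bob = v**2 / h

print(f"Alice's g = {g_alice:.2f} m/s^2")   # ~9.80
print(f"Bob's   g = {g_bob:.2f} m/s^2")     # ~19.60, twice Alice's value
```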

The only way you can demonstrate your claim to explain such physical phenomena accurately is to work with the raw data which comes out of particle colliders. Since you don't have such data you have not justified your position. Instead you've just piled assertions on assertions on assertions. For years.

Moreover, you both do not understand the title of this Section and you both proved many times that you do not understand what you are reading. Just two incompetent persons - I proved it many times!!!!
This subforum being 'alternative theories' doesn't mean they are beyond criticism. You make claims and those claims are flawed. You denounce people who point out such problems with your work, calling us various things pertaining to being ill-informed, and yet you ignore how the criticism you're hearing is consistent. If I, Rpenner, Markus etc. were really ignorant of physics we'd be giving inconsistent criticisms of your claims, yet we all align with one another.

Soon it will be obvious that my predictions for the alpha_strong at high energy are correct as well
Just like 'soon' we'll see faster-than-light neutrinos? Just like 'soon' the SM will be overturned? Just like 'soon' you'll get your work noticed? How many decades have you been saying 'soon'?

i.e. there will appear the asymptote 0.1139, i.e. my theory will prove that the mainstream asymptotic freedom is incorrect. There will still be the liquid-like plasma, not the gas-like plasma defended by AlphaNumeric
Once again you misrepresent me. The fact you're willing to misrepresent me in a discussion I'm part of shows how deep your dishonesty goes.

there will probably appear further free parameters or new mathematical tricks.
Just like you have to keep making excuses for your own failures?

But it will not last for ever.
Yes, at some point you'll realise how much of your life you've wasted and you'll go do something constructive. Hopefully it isn't when you're on your death bed but it seems like it'll be that way.

I must emphasize once more that the Everlasting Theory PROVES that the time-distance between the neutrino and photon fronts observed on Earth for the supernova SN 1987A follows from the superluminality of a great number of the neutrinos emitted during the explosion of the supernova.
Yet more misuse of terminology. No model ever proves anything. A model can only offer an explanation, and no amount of predictions or experimental verification will ever make this a 'proof'. In this case you're particularly dishonest because there are alternative explanations which do not require superluminal neutrinos.
 
Thanks for making a straw man by lying about what I've been saying. I didn't say anything about your formula for pseudo-rapidity……

Among other things I wrote as follows about the pseudorapidity for sqrt(s) = 2.76 TeV:
“We can find in a little earlier paper the experimental data which differ a little from the 2.17 ± 0.15, for example 2.35 ± 0.15 (May 11, 2012). This means that my result can overlap with more exact experimental result. Can you see that the lower limit in the earlier result (2.20) is greater than in the last (2.02)?”
And your answer was as follows (see your post #188 in this thread):

The fact the second experimental results are lower doesn't mean the results will continue to move down. Rather it would be that when you combine the two data sets from the two experiments you'll find the region now consistent with experiments is made much smaller (more experiments means better data means smaller error bars). In the case of the experiments you mention both of them allow values between 2.20 and 2.32. That's even further from your value of 1.93 than between 2.02 and 2.32 of the second experiment! You're making it more obvious you're wrong!

Now all can see that my result 1.93 overlaps with the new experimental data and that you were wrong!!!! What does it mean?

I proved once more that you do not remember what you were writing. I must once more joke that probably the name of your uncle was Alzheimer. You once more proved that you are a liar and a dishonest person.

Things like the value for $$\alpha_{strong}$$ are computed from raw data using the Standard Model. If the SM is wrong then the value of $$\alpha_{strong}$$ currently in data tables is wrong. So you're self-contradicting.

So once more: Due to the renormalization, the Quantum Theory of Fields (QTFs) is mathematically incoherent (Feynman said it). This causes that, to fit the theoretical results obtained within the QTFs to experimental data, scientists apply many approximations, mathematical tricks and free parameters. This means that we must wait for new experimental data at very high energy - the transfer of momentum (pc) must be at least about 1 or 2 TeV. I claim that there will appear the asymptote 0.1139. Today we know that the world average value obtained from many experiments for the momentum transfer 91.2 GeV is 0.1184 ± 0.0007. This result OVERLAPS with my theoretical result 0.1176 ± 0.0005. Whereas in the QTFs this value is a FREE PARAMETER!!!!!!

Recapitulation
The QCD fits the theoretical results to experimental data via free parameters.
I cannot change my theory. There are STILL only the 7 initial parameters. Their values are strictly determined. If we change even one value we destroy the whole theory.
I predicted within the Everlasting Theory the pseudorapidity for sqrt(s) = 5.02 TeV - I proved it!!!!

No, it isn't, and I can say that without even having to know any experimental data. How can I do that? By noting that the units of your equation are such that X has units of energy to the 1/4 power, i.e. $$E^{\frac{1}{4}}$$. Pseudo-rapidity is dimensionless; it doesn't have units of energy or mass or time or length or any combination of them. Your expression does, so the value will change when you change units: if you worked in BTUs rather than Joules you'd get a different value for X, but X should be independent of your choice of units.

Your cited sentences show once more your tremendous incompetence.
In my post I wrote as follows: “… RELATIVE pseudorapidity density…”. I wrote also that it is in relation to 0.2 TeV, so there appears a RATIO of energies and the result is dimensionless. But in the original formula for pseudorapidity (see formula (161) in my book) there appears a FACTOR that is not dimensionless (see the whole derivation). You as well should read my paper on viXra; then you probably would not write such nonsense.

….In this case you're particularly dishonest because there are alternative explanations which do not require superluminal neutrinos.

Can you prove that their interpretation is correct whereas my interpretation is incorrect? Of course, you cannot. My theory of neutrinos follows from the Everlasting Theory that is FREE from approximations, mathematical tricks and free parameters. It shows that my interpretation concerning the speeds of neutrinos emitted in the SN 1987A explosion is MUCH, MUCH MORE CREDIBLE.

AlphaNumeric, I can see that with time your posts are more and more nonsensical. It suggests something.
 
“We can find in a little earlier paper the experimental data which differ a little from the 2.17 ± 0.15, for example 2.35 ± 0.15 (May 11, 2012). This means that my result can overlap with more exact experimental result. Can you see that the lower limit in the earlier result (2.20) is greater than in the last (2.02)?”
And your answer was as follows (see your post #188 in this thread):

Now all can see that my result 1.93 overlaps with the new experimental data and that you were wrong!!!! What does it mean?
1.93 is not in either range.

Furthermore you're making an invalid assumption about what the ranges represent and how they are constructed. It is standard practice in science for multiple experiments, often by different teams at different places using different methods, to be done to try to measure a particular physical property. They will often not perfectly align, such as Experiment A giving a range of (for example) [0,0.75] for some physical property X and Experiment B giving a range [0.6,1.1]. Almost all such ranges are not hard and fast; they represent confidence intervals, i.e. the experiment is 95% or 99% or 99.99% (it will be stated in the published paper) confident that the true value lies somewhere in the stated range. The higher the confidence the wider the range. For multiple experiments the region of particular importance is the overlap region, i.e. the range of values which is within all of the confidence limits given. For example in the Experiments A and B I just described the range [0.6,0.75] is common to both. In general an experimental publication will give a distribution of likelihood for the value of X, akin to a probability distribution saying "We're Z percent confident X lies between a and b".
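As a concrete sketch of this interval logic (Python, treating each quoted ± as a symmetric 1-sigma-style uncertainty and using standard inverse-variance weighting, which may differ from the experiments' own combination procedure), here is how the overlap region and a combined value would be formed from the two pseudo-rapidity measurements quoted in this thread:

```python
# Minimal sketch using the two measurements quoted in this thread:
# 2.17 ± 0.15 and 2.35 ± 0.15. This is an illustration of the general
# procedure, not a reproduction of the actual experimental analyses.

measurements = [(2.17, 0.15), (2.35, 0.15)]

# Overlap region of the two quoted ranges.
lows = [m - s for m, s in measurements]
highs = [m + s for m, s in measurements]
overlap = (max(lows), min(highs))
print(f"overlap region: [{overlap[0]:.2f}, {overlap[1]:.2f}]")          # [2.20, 2.32]

# Inverse-variance weighted combination of the two central values.
weights = [1.0 / s**2 for _, s in measurements]
combined_mean = sum(w * m for (m, _), w in zip(measurements, weights)) / sum(weights)
combined_sigma = (1.0 / sum(weights)) ** 0.5
print(f"combined value: {combined_mean:.3f} +/- {combined_sigma:.3f}")  # 2.260 +/- 0.106

# The theoretical value discussed in the thread.
print("1.93 inside overlap region?", overlap[0] <= 1.93 <= overlap[1])  # False
```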

Now the issue of multiple experiments. Your conclusion that the range moving downwards somehow justifies your 'prediction' is flawed. The order is immaterial; the way of combining data from multiple experiments is not dependent upon the chronological order in which they were done. This procedure of combining information from multiple experiments is known as data fusion and is an important thing for experimental scientists to understand. To illustrate this consider the following picture:

[Image: LHC-Mass-Combination.png - combination of top quark mass measurements from multiple experiments]


That's some quantity within the Standard Model determined from experiments and modelling. Each row represents a different experiment used to measure the quantity, with the range for each experiment shown as the blue and red bars and the red circles being the range means. The rows are not in chronological order, but even if you reshuffle them to be in chronological order the ranges shift left and right as you go from experiment to experiment. What matters is not the trend of this shifting but where they all overlap, and how the total data collected combine into a smaller range due to increased confidence (more experiments means more data means better statistics). Specifically, notice how the first and fourth experiments, both from 2010, are the furthest left and right, respectively. Suppose we only considered those two, Experiment 1 and Experiment 4. If Experiment 1 was done before Experiment 4 then by your logic we'd expect future experiments to move further to the right of Experiment 4, into the $$m_{top} > 185\ \text{GeV}$$ range, as that would be the 'trend' you claim is going on with pseudo-rapidity. Conversely, if Experiment 4 were done before Experiment 1 then your logic says future experiments should be further to the left, into the $$m_{top} < 160\ \text{GeV}$$ range. Neither of those things occurred; future experiments didn't continue the trend. Instead future experiments began zeroing in on the region of overlap between Experiments 1 and 4, with the error bars shrinking as time passed due to accumulation of data.

Such diagrams are commonplace in any experimental domain of science, particularly in collider physics. If you had ever had the intellectual honesty to go find out about experimental methodologies, particularly after my repeated explanation of why you need to deal with raw data and not data processed through the Standard Model, as $$\alpha_{strong}$$ is, then you'd have seen such things. You'd know how fluctuations in range, both in terms of the end points and their means, are to be expected. Hell, if you knew even the most rudimentary pieces of statistics you'd know to expect this; it is taught in high school. I'll elaborate on that since you no doubt have no idea what I'm referring to.

Suppose you want to measure some physical quantity X, whose true value is $$X_{0}$$, but your measuring method is imperfect and suffers from noise which let's assume is Gaussian, $$\epsilon \sim \mathcal{N}(0,\sigma^{2})$$, i.e. you end up measuring $$X_{0}+\epsilon$$ for $$\epsilon$$ said random variable. You take a large number, N = PQ, of measurements and split them into P sets of size Q. We obviously expect the N samples to fluctuate about $$X_{0}$$ in a manner consistent with the $$\sigma^{2}$$ variance, but we're only able to access the sample mean and standard deviation, nothing more. To now put this in context in regards to experimental ranges, overlapping and fluctuating, we consider each of the P sets in turn. For set $$S_{i}$$ we compute the sample mean $$\langle S_{i} \rangle = s_{i}$$. It is a basic result within statistics that the sample means $$s_{i}$$ themselves follow a Gaussian distribution with mean $$X_{0}$$.

Since you no doubt don't understand that either, I'll explain it in words too. We have P different experimental teams, each of which makes Q measurements of X. Each then computes the mean value $$s_{i}$$ and can give a spread too, $$\sigma_{i}^{2}$$. For large P (i.e. so we can get good statistics) we expect there to be approximately as many $$s_{i} < X_{0}$$ as there are $$s_{i} > X_{0}$$, with the spread following said Gaussian. This means that if we received each of the $$s_{i}$$ values in turn, along with the sample variances to draw the confidence interval bars, they would fluctuate about the true value $$X_{0}$$, sometimes greater than, sometimes less than $$X_{0}$$, building up a Gaussian sample set.
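A minimal simulation sketch of this (Python, with arbitrary illustrative values for X0, sigma, P and Q) shows the per-team sample means scattering on both sides of the true value:

```python
# P experimental teams each take Q noisy measurements of a quantity whose true
# value is X0; the per-team sample means fluctuate around X0 on both sides.
import random

random.seed(0)
X0, sigma = 2.26, 0.3      # hypothetical true value and per-measurement noise
P, Q = 20, 50              # P teams, Q measurements each

sample_means = []
for _ in range(P):
    measurements = [random.gauss(X0, sigma) for _ in range(Q)]
    sample_means.append(sum(measurements) / Q)

above = sum(m > X0 for m in sample_means)
print(f"sample means range from {min(sample_means):.3f} to {max(sample_means):.3f}")
print(f"{above} of {P} means lie above X0, {P - above} below")
```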

This is an illustration of the flaw in your 'logic' that the fact the second experimental range is less than (but still overlapping) the first experimental range implies a slide down to your 1.93 value. In the example I just gave the Gaussian property makes it simple to do analytically, simple enough for children to learn it in school. Of course this doesn't mean the pseudo-rapidity measurements necessarily follow Gaussian fluctuations, but that isn't necessary for the result I just illustrated, thanks to the central limit theorem.

Alternatively your flawed logic can be seen by experiment. Get 2 dice and roll them 100 times, recording their total, $$X_{1}+X_{2}$$. Then split the 100 results into 10 lots of 10. For each set of 10 work out the range, ie lowest to highest. Then draw a diagram like the one I just posted, with each of the 10 ranges. Then draw a vertical line at 7, since $$\langle X_{1} + X_{2} \rangle = 7$$ for two fair dice. You'll see some of the ranges are mostly to the left of 7 and others mostly to the right. Some might not even overlap but this is an artefact of poor statistics due to small sample size and the use of range rather than standard deviation (I'm not sure if you're able to compute that, given your general poor maths skills). The order in which you list the 10 sets is irrelevant too, which is unlike your implicit assumption about the role of chronological order of experiments in overall results.
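A minimal sketch of the dice exercise just described (Python) rolls two dice 100 times, splits the results into 10 sets of 10 and reports each set's min-max range against the expected mean of 7:

```python
# Two dice, 100 rolls, split into 10 sets of 10; compare each set's range to 7.
import random

random.seed(1)
totals = [random.randint(1, 6) + random.randint(1, 6) for _ in range(100)]

for i in range(10):
    block = totals[10 * i: 10 * (i + 1)]
    lo, hi = min(block), max(block)
    mean = sum(block) / len(block)
    side = "left of 7" if mean < 7 else "right of 7"
    print(f"set {i + 1}: range [{lo}, {hi}], mean {mean:.1f} ({side})")
```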

So it would seem you have once again shown the major gaps in your understanding of even the most basic relevant subjects. You obviously haven't ever worked with experimental data in any practical way or even understood the methodology of statistical analysis, never mind the quantitative details in regards to distributions of sample means. It may well be the case that future experiments continue to move the range downwards, implying the two experimental values given thus far are actually statistical outliers, but this is not a certainty, particularly given the large number of individual measurements which go into each published range. Regardless, your assertion that there is undoubtedly a downwards trend is false, you're making a carte blanche statement about statistics which just isn't valid.

Now all can see that my result 1.93 overlaps with the new experimental data and that you were wrong!!!! What does it mean?
One range is [2.02,2.32] and the other is [2.2,2.5]. Neither of those includes 1.93. If those intervals represent 95% confidence limits (which is generally the case in experimental physics) then 1.93 is extremely unlikely to be consistent with the data.

I proved once more that you do not remember what you were writing. I must once more joke that probably the name of your uncle was Alzheimer. You once more proved that you are a liar and a dishonest person.
I remembered what I wrote correctly and my point is valid. You call me dishonest yet you just said "my result 1.93 overlaps with the new experimental data" when it doesn't; 1.93 is not in [2.02,2.32] nor [2.2,2.5]. Funny how you insinuate a degenerative neurological problem in me when you can say such easily exposed lies as that. As I said, I remembered what I said correctly, the point was correct and I've now elaborated in detail on why your assertion that there is necessarily a downward trend is flawed. All using statistics learned in high school, never mind some niche corner of graduate-level mathematics.

So once more: Due to the renormalization, the Quantum Theory of Fields (QTFs) is mathematically incoherent (Feynman said it).
I do like it when hacks who denounce the mainstream use mainstream physicists as quote sources to justify themselves when they need to. Always a bit of amusing hypocrisy. The mathematics of renormalisation has been put on much firmer rigorous ground since Feynman died in 1988. And while it is not an all-powerful perfect formalism it has clear practical utility; the most accurately tested model of science ever created by Man is quantum electrodynamics, which Feynman himself helped develop.

This causes that, to fit the theoretical results obtained within the QTFs to experimental data, scientists apply many approximations, mathematical tricks and free parameters.
Sylwester, remember who you're talking to. You might be able to con your friends and family into thinking you understand quantum field theory or have experience with its inner workings but we both know you don't. Some of us can actually do some calculations with QFT. I'm certain I know more about various issues within quantum field theory than you but I am also more familiar with the powerful applicability of it too.

Also, renormalisation is precisely about preventing the proliferation of parameters. If a model is non-renormalisable then it needs infinitely many parameters to fit data, while a renormalisable model requires only a finite amount.

In my post I wrote as follows: “… RELATIVE pseudorapidity density…”. I wrote also that it is in relation to 0.2 TeV, so there appears a RATIO of energies and the result is dimensionless. But in the original formula for pseudorapidity (see formula (161) in my book) there appears a FACTOR that is not dimensionless (see the whole derivation). You as well should read my paper on viXra; then you probably would not write such nonsense.
Ah, so it is my fault you cannot even present your work properly. Quite.

As for your 'paper' it isn't a 'paper' in the proper peer reviewed and published sense, it's a document you just put on a hosting website. Still can't get a journal to publish your work eh? Besides, I have plenty of other papers I have to read, some of us need to be able to produce results, not rhetoric on a forum.

Can you prove that their interpretation is correct whereas my interpretation is incorrect? Of course, you cannot.
Where did I say that? I didn't. I said it is dishonest of you to assert that particular experimental results demonstrate superluminal neutrinos when alternative explanations are still on the table. If both model A and model B explain experiment X then it is dishonest to say "Experiment X shows model A is true". No, that would only be the case if all alternatives were excluded. Explanations for the 1987 supernova which do not require superluminal neutrinos exist and so the event cannot act as confirmation of your claims, only exclude some others that are inconsistent with the event. Obviously this subtle issue is something you struggle to grasp.

My theory of neutrinos follows from the Everlasting Theory that is FREE from approximations, mathematical tricks and free parameters. It shows that my interpretation concerning the speeds of neutrinos emitted in the SN 1987A explosion is MUCH, MUCH MORE CREDIBLE.
So why haven't you gotten it published yet? Are you not sending it to journals? Why are you still on this forum spewing out rhetoric?

AlphaNumeric, I can see that with time your posts are more and more nonsensical. It suggests something.
It suggests your reading skills are rather poor. You obviously have no problem insinuating Alzheimer's or schizophrenia in people, specifically me, which speaks volumes about the kind of person you are. Well done, you've spent 3 decades touting your nonsense and it has led to you being stuck in the pseudo-science section of a forum insinuating mental health problems in others. Perhaps you're just trying to be provocative so I'll reply. After all, if I didn't reply to you you'd be completely ignored by everyone. My my, haven't those 30 years been fruitful.
 
Good post on explaining what an experimental error bar represents. Hard to believe you have to explain that to Kornowski.
 
AlphaNumeric, I proved many times that you completely do not understand basic problems in particle physics. You desperately try to be right but you are not. I proved that you did not understand the difference between the asymptotic freedom and confinement. You completely do not understand that all unsolved basic problems in particle physics follow from the wrong assumptions that the bare fermions are the sizeless points or vibrating Planck-size closed strings that have spin, mass, sometimes charge and so on. Such an idiotic assumption follows from the incompetence of its authors. It causes that we still cannot define the exact masses of the up and down quarks, so we cannot calculate the masses of the FUNDAMENTAL BLOCKS of Nature, i.e. of protons and neutrons. The present-day system of education hebetates the new generations of physicists and you are a victim of such a system.

Moreover, you do not understand what you are reading. I wrote that there is no doubt that the new more precise data will be closer to my result 1.93 and it is true (I cited the new paper). I wrote it because I am convinced that my Everlasting Theory, based on the phase transitions of the modified Higgs field and the Titius-Bode law for the strong interactions, is correct. On the other hand, you wrote about the obvious things because you assume that readers cannot see your OBVIOUS mistakes. It is obvious that in different experiments the accuracy is different, so the experimental data are different as well. That is why we speak about the WORLD AVERAGE VALUE. I wrote about it!!!! Whereas you wrote the very long post in which you suggest that I do not understand the problem - just you are dishonest. So once more: You wrote as follows.

Thanks for making a straw man by lying about what I've been saying. I didn't say anything about your formula for pseudo-rapidity……

Among other things I wrote as follows about the pseudorapidity for sqrt(s) = 2.76 TeV:
“We can find in a little earlier paper the experimental data which differ a little from the 2.17 ± 0.15, for example 2.35 ± 0.15 (May 11, 2012). This means that my result can overlap with MORE EXACT experimental result. Can you see that the lower limit in the earlier result (2.20) is greater than in the last (2.02)?”

The words “more exact experimental result” mean that the systematic and statistical uncertainties are smaller.

And your answer was as follows (see your post #188 in this thread):

The fact the second experimental results are lower doesn't mean the results will continue to move down. Rather it would be that when you combine the two data sets from the two experiments you'll find the region now consistent with experiments is made much smaller (more experiments means better data means smaller error bars). In the case of the experiments you mention both of them allow values between 2.20 and 2.32. That's even further from your value of 1.93 than between 2.02 and 2.32 of the second experiment! You're making it more obvious you're wrong!

Now all can see that my result 1.93 overlaps with the new experimental data (you PURPOSELY omitted these results in your last post to show that you are right - just you are dishonest) and that you were wrong!!!!

That is the reason I wrote something about Alzheimer. I proved ONCE MORE that you do not remember what you are writing.

In many scientific papers and books you can read that the Quantum Theory of Fields is at least an incomplete theory, so there appear the approximations, mathematical tricks and free parameters such as, for example, the alpha_strong at the mass of the Z boson TAKEN FROM EXPERIMENT, or the normalization Z and the mass and charge of the electron in QED. I only cite the scientists. You should just read more.

You proved once more that discussion with you is useless because you behave like a fanatic.
 
AlphaNumeric, I proved many times that you completely do not understand basic problems in particle physics. You desperately try to be right but you are not.
When you get your work past peer review and someone pays you to do mathematics and physics let me know. Until then reality disagrees with you, I'm a professional researcher.

I proved that you did not understand the difference between the asymptotic freedom and confinement.
Anyone who reads our 'discussions' on them will see otherwise.

You completely do not understand that all unsolved basic problems in particle physics follow from the wrong assumptions that the bare fermions are the sizeless points or vibrating Planck-size closed strings that have spin, mass, sometimes charge and so on. Such an idiotic assumption follows from the incompetence of its authors. It causes that we still cannot define the exact masses of the up and down quarks, so we cannot calculate the masses of the FUNDAMENTAL BLOCKS of Nature, i.e. of protons and neutrons. The present-day system of education hebetates the new generations of physicists and you are a victim of such a system.

Moreover, you do not understand what you are reading. I wrote that there is no doubt that the new more precise data will be closer to my result 1.93 and it is true (I cited the new paper). I wrote it because I am convinced that my Everlasting Theory, based on the phase transitions of the modified Higgs field and the Titius-Bode law for the strong interactions, is correct. On the other hand, you wrote about the obvious things because you assume that readers cannot see your OBVIOUS mistakes. It is obvious that in different experiments the accuracy is different, so the experimental data are different as well. That is why we speak about the WORLD AVERAGE VALUE.
Just vapid assertions and self-promotion. I've heard your nonsense, I don't need to hear it again seeing as you're not providing anything new.

Among other things I wrote as follows about the pseudorapidity for sqrt(s) = 2.76 TeV:
“We can find in a little earlier paper the experimental data which differ a little from the 2.17 ± 0.15, for example 2.35 ± 0.15 (May 11, 2012). This means that my result can overlap with MORE EXACT experimental result. Can you see that the lower limit in the earlier result (2.20) is greater than in the last (2.02)?”

The words “more exact experimental result” mean that the systematic and statistical uncertainties are smaller.

And your answer was as follows (see your post #188 in this thread):

Now all can see that my result 1.93 overlaps with the new experimental data (you PURPOSELY omitted these results in your last post to show that you are right - just you are dishonest) and that you were wrong!!!!
How is 1.93 inside 2.17 ± 0.15 or 2.35 ± 0.15? It is inside neither. How am I wrong in saying that? You do realise that 2.35 ± 0.15 is 2.2 to 2.5 and 2.17 ± 0.15 is 2.02 to 2.32, right? Neither of them includes 1.93.

That is the reason I wrote something about Alzheimer. I proved ONCE MORE that you do not remember what you are writing.
No, if I didn't remember what I'd written I'd be forgetful or mistaken. Implying I have a degenerative terminal neurological disorder is just being a jackass without merit. If I called you retarded for not seeing how 1.93 isn't between 2.02 and 2.32 or 2.2 and 2.5 would I be justified in that assessment? No. Would I be justified in calling you retarded for needing the same thing explained to you many times over many years and you still don't get it? No.

It's a sign you have nothing to retort my comments with; all you can do is ignore how I keep pointing out you've accomplished nothing in 30 years. If you really think I have Alzheimer's and/or schizophrenia it must be really unpleasant for you to see how, in that same time frame, I have accomplished more in science than you. 30 years ago I wasn't even born! You had a huge head start and despite all your claims of how right you are you've accomplished nothing. Beaten by someone young enough to be your child and who, according to you, suffers from multiple neurological afflictions. Yep, you really are a massive success story Sylwester. :rolleyes:

In many scientific papers and books you can read that the Quantum Theory of Fields is at least an incomplete theory, so there appear the approximations, mathematical tricks and free parameters
I never claimed it's perfect. In fact I've said on this forum I expect it to be replaced with something entirely different in its formalisation at some point. The fact it won't be your work doesn't mean I don't think it might be something else.

such as, for example, the alpha_strong at the mass of the Z boson TAKEN FROM EXPERIMENT, or the normalization Z and the mass and charge of the electron in QED. I only cite the scientists.
I've explained it again and again and again and you still don't get it. Here is ANOTHER example.

Suppose I wish to experimentally measure the value of G, Newton's constant. It cannot be measured directly, unlike length or volume; it has to be computed by analysing what we can measure. For example, in Newtonian gravity it is defined by $$F = G\frac{M_{1}M_{2}}{r^{2}}$$, and so I know that two spherical objects of known masses $$M_{1},M_{2}$$ a known distance $$r$$ apart will experience a force F according to Newton, and I can therefore measure G by computing $$\frac{Fr^{2}}{M_{1}M_{2}}$$. Each of those I can measure and so I can compute G, seeing as I cannot measure it directly. But what if I used Einstein's work instead, where now $$R_{ab} - \frac{1}{2}Rg_{ab} = 8\pi G T_{ab}$$? Now I should solve the relevant equations for the metric, compute the geodesic equation of motion and use it to compute G from the measurements I made.
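As a minimal sketch of that extraction step (Python, with entirely hypothetical Cavendish-style numbers), the Newtonian formula turns the directly measured F, r, M1, M2 into an inferred G; a different force law would turn the same raw numbers into a different inferred G, which is the point of the surrounding paragraphs.

```python
# Hypothetical Cavendish-style measurements: force F between two spheres of
# masses M1, M2 separated by r. Under Newton, F = G*M1*M2/r^2, so the inferred
# "experimental" G is F*r^2/(M1*M2).

F = 1.46e-7    # measured force in newtons (hypothetical)
M1 = 158.0     # kg (hypothetical)
M2 = 0.73      # kg (hypothetical)
r = 0.23       # metres (hypothetical)

G_newton = F * r**2 / (M1 * M2)
print(f"inferred G (Newtonian model): {G_newton:.3e} m^3 kg^-1 s^-2")
```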

Now for the central point: I'll get a different answer. Why, after all, when I'm using the same experimental data? It's because I'm using different models to interpret the data, to extract out some quantity I cannot directly measure. Einstein and Newton don't perfectly agree; they'll process experimental data differently. This is exactly as if I tried to work out the fine structure constant using non-relativistic quantum mechanics to interpret particle scattering experimental data, compared to quantum electrodynamics. Hell, if the particles collide fast enough I will need to include electroweak and strong force corrections, so using the full Standard Model will give me a third answer. Despite the experimental data being the same in each of the three cases the models handle the data differently and thus I'd conclude different values for the fine structure constant. This is why particle physicists are often obsessed with computing 2 or 3 or 4 order loop corrections to scattering processes; leaving out or including certain contributions to a scattering process calculation will lead to different conclusions about the value of things we cannot measure directly, such as the fine structure constant.

This is particularly clear for the electron's g-factor, often just called g, whose deviation from 2 is the anomalous magnetic moment. According to tree level QED the value of g is EXACTLY 2. Don't need any data, it is absolutely definitely exactly 2. But it isn't. It's very close but it isn't quite 2. But if I used tree level QED to 'measure' it I'd get the wrong answer. Adding in quantum field theoretic corrections to the scattering process leads to a corrected value for g, with $$g = 2 + \frac{\alpha}{\pi}$$. Adding in 2 loop or beyond alters the prediction again, further refining it. Different 'models' give different values. Therefore any quantity whose experimental measurement relies on the value of g will change as our modelling improves, even if the data doesn't.
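A minimal numeric sketch of that correction (Python, using the standard approximate value of the fine structure constant) shows the size of the one-loop shift relative to the tree-level value:

```python
# Tree-level QED gives g = 2; the one-loop (Schwinger) correction gives
# g = 2 + alpha/pi, using alpha ~ 1/137.036.
import math

alpha = 1.0 / 137.036          # fine structure constant (approximate)

g_tree = 2.0
g_one_loop = 2.0 + alpha / math.pi

print(f"tree level: g = {g_tree}")
print(f"one loop:   g = {g_one_loop:.6f}")          # ~2.002323
print(f"one-loop shift: {alpha / math.pi:.6f}")
```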

You claim the SM is flawed to its core. Quarks are 'shames'. If that is so then any quantity we cannot directly measure, unlike a length or time or energy, may have a completely different value in reality, as we're extracting the values from the data using a flawed model. If we extracted G from data using Newtonian gravity we'd introduce an error which wouldn't be there if we used Einstein (or at least Einstein would give a smaller error). You simultaneously denounce quarks while claiming you can predict something whose experimental measurement relies on our quark models.

Just you should read more.
I'm certain I read and do more science than you. I get paid to do it, it is how I put food on the table.

You proved once more that discussion with you is useless because you behave like a fanatic.
Funny how I'm able to have discussions with others in other threads. Look at how brucep thanked me for the explanation of how experimental data is handled using statistical methods. Look at how I have plenty of discussions elsewhere on this forum. You call me a fanatic but I don't say, since I don't believe such a thing, that the current state of theoretical physics is close to 'complete'. Some day something will come along and replace quantum field theory and general relativity and wrap up all of theoretical physics domains into a single unified framework. If I'm lucky I'll still be alive when that happens. I don't care who does it, Witten, Hawking, a team of researchers from Cambridge or MIT or just some shmuck who first posts it on an internet forum and ViXra. What I care about is intellectual honesty, sound reasoning, honest discussion and well presented ideas.

You provide none of these things. You constantly misrepresent me. You claim "I proved that you did not understand the difference between the asymptotic freedom and confinement" but what you really mean is that you have your own meanings for those concepts, and since I explained the mainstream notions of them to you, when you made it clear on PhysForums you didn't realise there is a difference, I therefore don't align my views with your claims, and since you claim you're certainly right I must therefore be wrong. This and the many other "Sylwester defined" versions of things like 'effective theory', 'string theory', 'M theory', 'T duality', 'neutrinos' etc. speak volumes about you. The fact you ignore the parts of my posts where I ask you why you aren't sending your work to reputable journals, rather than hanging around on a forum like this, shows you know you're going nowhere.

If you really think I have numerous neurological issues why do you come here? No one else replies to you and no amount of forum posts will get your work published or taken seriously. You believe you're right yet you're doing nothing to move things forward. You could spend your time writing papers to submit to journals, or getting yourself some university-level qualification and then a PhD place, to get your foot in the door. Why not do that instead of coming here to converse with someone you think is a liar and suffering from Alzheimer's? Is this seriously the action of someone who really believes they have an understanding of physics superior to all? None of the people on this forum who have work published in reputable journals post their work here, and certainly not in the manner you do. Even if every single person on this forum, including me, were convinced by your claims you'd still have gotten no closer to getting your work published in a reputable journal and into the research community.

So why are you here? I'd wager because deep down you know you cannot get your work taken seriously by the mainstream community because it's too riddled with nonsense, dishonesty and ignorance. If you really think I have Alzheimer's, then the facts that the only person who says anything to you is me and that you keep coming back, despite saying discussion with me is "useless", speak loud and clear about how much you've accomplished, or rather failed to accomplish. As an honest piece of advice, go do something else. Go travelling, learn another language, take up painting. Why? Because you clearly aren't going anywhere in physics.
 
So why are you here?

With 'here' I presume you refer to the Fringe section. The answer is simple : because he's a crank. No one is taking him seriously, anywhere. All the forums he popped up on either banned him outright, or confined him to the "Alternatives" area, or the trash can. And for good reason.

My question would be - why do we even respond to this clown? Just let him and his nonsense die a natural death. It is not like he is prepared to listen and learn.

What do you want to bet that some comment about "bigotry" will result from this post of mine...? :)
 
With 'here' I presume you refer to the Fringe section. The answer is simple : because he's a crank. No one is taking him seriously, anywhere. All the forums he popped up on either banned him outright, or confined him to the "Alternatives" area, or the trash can. And for good reason.
I meant on a forum in general. He is in this part of the forum because he isn't allowed to post his nonsense in the main maths & physics section. The question is why he's spending so much time posting his work on a forum. When I was doing my PhD and writing papers I didn't post the work here; why would I want to? After all, very few people here have the knowledge sufficient to discuss research level physics or mathematics. Hell, Ben and I did very similar PhDs (we never met but we'd each met people who had met the other) and we didn't really follow one another's work very much. Posting work you claim is research level on a forum is pointless; even if it is right almost no one can really discuss it with you in any meaningful way. The only reason to post your own work in any significant way is to try to stroke your ego, to try to convince others you're oh so smart. Too bad most hacks find this backfires on them, as what they consider complicated is often laughably easy for actual physicists and mathematicians. When I post the most basic things from relativity, hacks here claim I'm showing off or just posting complicated mathematics, unaware it's stuff undergrads should know inside out.

If Sylwester were really and truly about seriously getting into the research community he'd not be posting his work here like he does. He'd not spend his time complaining that someone he thinks has Alzheimer's is the only person talking to him. He'd be submitting his work to journals and then iterating his work based on the comments from the journals. That's the only way he's going to get anywhere; no amount of forum posting will manage that. But he doesn't do that, he spends his time posting on forums and going nowhere. Like I have said, he's been spewing this nonsense for 30 years. I'm 29, I was born after he started his nonsense, and look how little he's accomplished.
 
AlphaNumeric, I proved that you did not understand the difference between the asymptotic freedom and confinement. You claim as follows.

Anyone who reads our 'discussions' on them will see otherwise.

This means that I must once more cite your nonsensical ‘explanations’. In your post #214 (see page 11) you wrote as follows.

Asymptotic freedom is not the fact quark interactions get stronger as you move the quarks away from one another. That is, as it happens, related to confinement.

And my answer was as follows: “AlphaNumeric, can you see that you PROVED that you do not understand what the confinement means, and the asymptotic freedom as well? Can you see how big a liar you are? In your last post you claim that confinement does not depend on distance whereas in post #214 you claim that confinement depends on distance. Can you see that I am right claiming that you completely do not understand confinement and asymptotic freedom? Can you see that I am right claiming that I teach you what these terms mean, not you me?
Can you stop to write the nonsense?”

AlphaNumeric, I proved many times that you have claimed both for and against the statement that confinement depends on distance.

In my last post I wrote as follows: “In many scientific papers and books you can read that the Quantum Theory of Fields is at least the incomplete theory so there appear the approximations, mathematical tricks and free parameters…..”. Your answer was as follows.

I never claimed it's perfect.

Yes, you never claimed it's perfect, but in your last post but one we can read as follows.

….I'm certain I know more about various issues within quantum field theory than you but I am also more familiar with the POWERFUL APPLICABILITY of it too.
Also, renormalisation is precisely about preventing the proliferation of parameters. If a model is non-renormalisable then it needs infinitely many parameters to fit data, while a renormalisable model requires only a finite amount.

So once more: the two assumptions in the Quantum Theory of Fields (the QTFs), that the bare fermions are sizeless points and that sizeless points can have mass, spin and sometimes charge, make the QTFs an incomplete theory, produce many wrong interpretations, and force scientists to apply many approximations, mathematical tricks and free parameters in order to fit the theoretical results to experimental data. This concerns the mainstream QED as well, whereas it does not concern my QED, because in my theory the internal structure of the bare fermions follows from the phase transitions of the modified Higgs field.

The QTFs, due to the TWO NONSENSICAL ASSUMPTIONS, will never lead to the origin of the basic physical constants, i.e. to the origin of the gravitational constant, the Planck constant, the speed of light, the electric charge of the electron and the bare mass of the electron. Moreover, the two nonsensical assumptions mean that within QCD, since 1964, we have been unable to calculate the masses of the fundamental building blocks of Nature, i.e. of the protons and electrons. We can see that the basic unsolved problems in particle physics lie outside the QTFs.

And you have the effrontery to claim that the QTFs have powerful applicability. Yes, they have 'powerful applicability' only due to the tremendous number of approximations, mathematical tricks and free parameters taken from the ceiling. My theory is free from such incompetence because my theory is free from the two nonsensical assumptions.

Some scientists claim that because we found the Higgs boson we know everything. Only idiots can claim that. Within the mainstream theories we still cannot solve a tremendous number of basic problems. I solved these problems because the Everlasting Theory is free from the two nonsensical assumptions. Due to the two nonsensical assumptions there has been regress in THEORETICAL particle physics since 1948. Due to the two nonsensical assumptions, new generations of physicists completely fail to understand the foundations of physics. That is why AlphaNumeric still writes nonsensical posts. The QTFs are such a messy theory because their authors start from the two nonsensical assumptions, and the string/M theory cannot change this because it starts from other nonsensical assumptions. Of course, there are closed strings, but their properties differ very much from the properties assumed in the mainstream string/M theory.

Recapitulation
The Quantum Theory of Fields and the string/M theory start from nonsensical assumptions. This is why within these theories we cannot solve the basic problems, i.e. these theories do not lead to the origin of the basic physical constants or to the masses of the fundamental building blocks of Nature, i.e. protons and neutrons. The nonsensical assumptions mean that new generations of physicists do not understand physics, because there are mathematical tricks that Nature cannot realize and there are free parameters taken from the ceiling.
The true physics you can find in my book and papers (see viXra).
 
Well, he is on viXra ;)
Which is like saying "I'm published on Geocities".

AlphaNumeric, I proved that you did not understand the difference between asymptotic freedom and confinement. You claim as follows.

This means that I must once more cite your nonsensical ‘explanations’. In your post #214 (see page 11) you wrote as follows.

And my answer was as follows: “AlphaNumeric, can you see that you PROVED that you do not understand what the confinement means but the asymptotic freedom as well? Can you see how big liar you are? In the last your post you claim that confinement does not depend on distance whereas in the post #214 you claim that confinement depends on distance. Can you see that I am right claiming that you completely do not understand confinement and asymptotic freedom?
Thanks for giving another example of how you are dishonest. You quote me saying "Asymptotic freedom is not the fact quark interactions get stronger as you move the quarks away from one another. That is, as it happens, related to confinement." There I am saying that asymptotic freedom is not about how close particles are, and that it is in fact confinement which pertains to how close particles are. So despite quoting me saying that, you turn around and, 2 lines later, say "In the last your post you claim that confinement does not depend on distance". How is that a quote of me saying that confinement doesn't depend on distance, when the comment explicitly says it isn't asymptotic freedom which is distance dependent, it is confinement which pertains to the distance between particles? Come on, this is really very basic. You're now showing either that you cannot understand simple single-line comments or that you're so desperate to lie that you'll misrepresent a quote immediately after posting said quote!

Asymptotic freedom is about how the coupling 'constant' between two particles varies with energy scale. Confinement is about how no isolated colour charges are visible on a macroscopic scale, which leads to a flux tube forming whose properties are much like an elastic string joining the two particles. Of course I wouldn't be surprised in the least if you're also doing your usual thing of making up your own definitions and then declaring I don't understand asymptotic freedom and confinement because my explanations, those of the mainstream community, do not align with yours.
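For anyone following along, a standard textbook way to see the distinction is the Cornell-type potential often used to model a quark-antiquark pair (a sketch to illustrate the point, with only rough numbers):

$$V(r) \approx -\frac{4}{3}\frac{\alpha_{s}}{r} + \sigma r$$

The Coulomb-like first term dominates at short distances, while the linearly rising second term (string tension $$\sigma \sim 0.18 \ \textrm{GeV}^{2}$$, roughly the lattice value) is the 'elastic string' behaviour I described, i.e. confinement. Asymptotic freedom, by contrast, lives in how $$\alpha_{s}$$ itself changes with the energy scale of the interaction, not in the shape of this potential.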

So, are you going to admit that my comment "Asymptotic freedom is not the fact quark interactions get stronger as you move the quarks away from one another. That is, as it happens, related to confinement." does not say "confinement does not depend on distance" and is in fact completely consistent with Post 214?

Can you see that I am right claiming that I teach you what these terms mean, not you me?
Can you stop to write the nonsense?”

AlphaNumeric, I proved many times that you have claimed both for and against the statement that confinement depends on distance.
Well, let's consider Post 214. I quote your definition of asymptotic freedom: "Asymptotic freedom : Scientists claim that in the strong field there is obligatory the stronger and stronger mutual attraction of the point quarks when they are moving away.". I then give you what scientists mean when they say 'asymptotic freedom', i.e. that it is about the energy scale of the particle interaction, not the distances between particles. Yes, the total force experienced includes a distance dependency, but the coupling constant also varies, with asymptotic freedom meaning it goes to zero as the energy scale goes infinitely high.

Thanks for bringing up Post 214 because it is an example of where you don't know what scientists mean by 'asymptotic freedom', despite you phrasing it as "Scientists claim...", and of me then correcting that mistake in your understanding. Anyone who doesn't know which of us is right is welcome to check the Wikipedia page on asymptotic freedom, where it will explain that it is about how the coupling constant depends on the energy scale, and that a system is asymptotically free if the coupling goes to zero as the energy scale is increased without bound.
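To make 'depends on the energy scale' concrete, the standard one-loop QCD result (textbook material; I'm quoting it from memory, so treat the exact coefficients as indicative) is

$$\alpha_{s}(Q^{2}) = \frac{12\pi}{(33 - 2n_{f})\,\ln(Q^{2}/\Lambda^{2})}$$

where $$Q$$ is the energy scale of the interaction, $$n_{f}$$ is the number of active quark flavours and $$\Lambda$$ is the QCD scale, a few hundred MeV. For fewer than 17 flavours the denominator grows with $$Q$$, so $$\alpha_{s} \to 0$$ as $$Q \to \infty$$: that is asymptotic freedom. Distance enters only indirectly, through the loose correspondence between high $$Q$$ and short distances.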

There, a clear and undeniable example of how you didn't know what 'asymptotic freedom' meant and I had to tell you. Again. Even if you try to make up your own definition, as you tend to do, the fact you opened with the preface "Scientists claim..." means you're talking about what the mainstream says, and, speaking as someone in the mainstream community with a working understanding of asymptotic freedom (one of the first things I did when I started my PhD was the full one-loop scalar QCD renormalisation flow for the coupling constant), I can attest to you being wrong, as Wikipedia will attest too.

Seeing as you have clearly been paying attention to what I say, enough to go back and refer to a post I made more than 6 months ago, you no longer have any excuse for claiming that you're the one with the correct definition and I'm the one mistaken; post 214 proves otherwise. Henceforth any further insinuation I'll consider trolling and act accordingly. As such I suggest you spend your time making other (nonsense) claims since this particular hobby horse of yours is done.

So once more: the two assumptions in the Quantum Theory of Fields (the QTFs), that the bare fermions are sizeless points and that sizeless points can have mass, spin and sometimes charge, make the QTFs an incomplete theory,
Spin, as in a quantum number whose quantisation properties mirror those of classical angular momentum; they don't literally spin like a top. And it is not necessarily the case that those two assumptions prevent a model from being viable; it may be the case that there's a better particle physics model with those properties which doesn't have the issues QFT has.
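To spell out what 'quantum number' means here (standard quantum mechanics, not specific to either of our models): the spin operators act on a particle's state as

$$\hat{S}^{2}\,|s, m_{s}\rangle = s(s+1)\hbar^{2}\,|s, m_{s}\rangle, \qquad \hat{S}_{z}\,|s, m_{s}\rangle = m_{s}\hbar\,|s, m_{s}\rangle$$

with $$s$$ fixed for a given species (1/2 for the electron) and $$m_{s} \in \{-s, -s+1, \dots, s\}$$. That algebra is all 'spin' refers to; no rotating extended body is needed for it to hold.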

produce many wrong interpretations, and force scientists to apply many approximations, mathematical tricks and free parameters in order to fit the theoretical results to experimental data. This concerns the mainstream QED as well, whereas it does not concern my QED, because in my theory the internal structure of the bare fermions follows from the phase transitions of the modified Higgs field.
Given you have no understanding of what the Higgs field or the Higgs mechanism actually involve, your use of 'Higgs' in this context is unjustified, just as your claim that you explain or incorporate the various string theories into your work is dishonest. You use mainstream terminology to label your work in an attempt to dress your nonsense up, at least to the casual reader.

The QTFs, due to the TWO NONSENSICAL ASSUMPTIONS, will never lead to the origin of the basic physical constants, i.e. to the origin of the gravitational constant, the Planck constant, the speed of light, the electric charge of the electron and the bare mass of the electron. Moreover, the two nonsensical assumptions mean that within QCD, since 1964, we have been unable to calculate the masses of the fundamental building blocks of Nature, i.e. of the protons and electrons. We can see that the basic unsolved problems in particle physics lie outside the QTFs.
They might well be but that doesn't mean we should just accept your work. Your work has fundamental problems of its own.

And you have the effrontery to claim that the QTFs have powerful applicability. Yes, they have 'powerful applicability' only due to the tremendous number of approximations, mathematical tricks and free parameters taken from the ceiling.
That's like trying to criticise fluid mechanics because it makes the assumption that fluids are continuous and ignores how they are really many, many particles. Quantum field theory has real predictive power. Yes, you have to put in a small number of parameter values, but then you can model and understand thousands of different atomic systems, with applications ranging from microchip design and medical scanners through to fusion reactors and drug design. Using simplifications and approximations where appropriate is not a problem, and mathematical tricks are not tricks, they are methods. Doing a change of variables to solve an integral or construct an equation of motion isn't some dirty sleight of hand, it is the application of a logical methodology to put a model to use.
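Since 'mathematical tricks' keeps coming up, here is the sort of thing actually meant, using a toy integral of my own choosing rather than anything from QFT: the substitution $$u = x^{2}$$ gives

$$\int_{0}^{\infty} x\, e^{-x^{2}}\, dx = \frac{1}{2}\int_{0}^{\infty} e^{-u}\, du = \frac{1}{2}$$

Nothing is hidden or fudged; you get the same answer however you parametrise the integral, which is precisely why these are methods, not tricks.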

You obviously have no idea how mathematical models and physical problems are combined for the purposes of practical use.

My theory is free from such incompetence because my theory is free from the two nonsensical assumptions.
And 'God did it' is free from anything but a single assumption. Doesn't make it valid science.

Some scientists claim that because we found the Higgs boson we know everything. Only idiots can claim that.
Only an idiot would try to tar an entire community for the comment of a tiny minority. Hell, I don't know if any scientist would say that. Please provide a reference to a reputable scientist saying such a thing.

You know full well that isn't the view of the majority of the mainstream community; if you thought it was, you'd have phrased what you said as "Scientists claim that...". Instead you put in "some" at the start, to act as a qualifier. Yes, some might say that, but they are an extremely small minority, if they exist at all.

The true physics you can find in my book and papers (see viXra).
You skipped over the parts of my post where I highlighted how you are clearly not really trying to get your work into the community, that you post your nonsense on a forum because you know deep down you'll never manage to do anything more than perhaps con a few laypersons. What's the matter, unwilling or unable to give a good explanation of why you're not submitting your work to journals and instead spend your time posting here and just putting documents on viXra for no one to care about?

Your silence speaks volumes Sylwester. It says you know you've not got anything worthwhile.
 
AlphaNumeric, I have no time for nonsensical discussion. I will prove your incompetence once more.

…. I then give you what scientists mean when they say 'asymptotic freedom', i.e. that it is about the energy scale of the particle interaction, not the distances between particles.

Here

http://web.mit.edu/physics/people/faculty/docs/wilczek_nobel_lecture.pdf

is Chapter 2 titled
“Paradox Lost: Antiscreening, or Asymptotic Freedom”
On page 7 we can read as follows:
“…..According to this mechanism, the color charge of a quark, VIEWED UP CLOSE, IS SMALL. It builds up its power to drive the strong interactions by accumulating a growing cloud at LARGER DISTANCES.”

AlphaNumeric, now everyone can see that you are trying to be wiser than the Nobel Prize winner Frank Wilczek. Probably there is something wrong with you. You should read the original papers. Of course, Wikipedia is useful, but it is first of all for laymen, not for a PhD.

On the other hand, I must emphasize that the whole of Nature shows that the above mechanism is impossible. There is a different explanation of asymptotic freedom in my book and in the paper titled “The Reformulated Asymptotic Freedom”. It is the NORMAL explanation that, at very high energy (the transfer of energy should be at least 1 TeV), leads to $$\alpha_{strong}$$ equal to 0.114, i.e. there appears the asymptote at 0.1139. Moreover, the confinement DOES NOT DEPEND ON DISTANCE.

Spin, as in a quantum number whose quantisation properties mirror those of classical angular momentum; they don't literally spin like a top.

You wrote: “…they don’t literally spin like a top.”
This is the next piece of nonsense in the mainstream theories. My Everlasting Theory, based on the phase transitions of the fundamental spacetime and on the constancy of the half-integral spin, shows that particles do literally spin like a top. There are two formulae: for real particles, spin = mvr, whereas for virtual particles the obligatory formula is spin = energy × lifetime-of-a-loop.
Nature is NORMAL.
Nature is NORMAL.
Nature is NORMAL.
Only some ideas are nonsensical and follow from the incompetence of their authors.

I understand perfectly what the Higgs mechanism is, and I have described the limitations of this mechanism (see my book and my papers, of course on viXra). I have described the NORMAL mechanism, i.e. how particles really acquire their mass.

We must change the methods applied in the QTFs, because such nonsensical methods will never lead to the origin of the basic physical constants and the masses of the fundamental building blocks of Nature, i.e. of the electron, proton, neutron and neutrinos. Only my Lacking Part of Ultimate Theory leads to the FOUNDATIONS of physics and cosmology.

Only idiots can assume that there is an alternative way to solve the unsolved basic problems pointed out above.
 
On page 7 we can read as follows:
“…..According to this mechanism, the color charge of a quark, VIEWED UP CLOSE, IS SMALL. It builds up its power to drive the strong interactions by accumulating a growing cloud at LARGER DISTANCES.”

Perhaps you should have read page 6 as well, you know, the bit where it says "In particular, energy is required to build up the thundercloud...".

Enough said!
 
Markus, you behave like a 5-year-old child. You completely fail to understand what you are reading. First you should read all the previous posts.

So once more: higher collision energy means smaller distances, so we can speak of either energy or distance. On the other hand, AlphaNumeric claims that SCIENTISTS DO NOT WRITE ABOUT DISTANCE in relation to asymptotic freedom. I proved that even those authors write about distance AS WELL.
 