1.93 is not in either range.
Furthermore, you're making an invalid assumption about what the ranges represent and how they are constructed. It is standard practice in science for multiple experiments, often by different teams in different places using different methods, to be done to try to measure a particular physical property. They will often not perfectly align, such as Experiment A giving a range of (for example) [0,0.75] for some physical property X and Experiment B giving a range [0.6,1.1]. Almost all such ranges are not hard and fast; they represent confidence intervals, ie the experiment is 95% or 99% or 99.99% (it will be stated in the published paper) confident that the true value lies somewhere in the stated range. The higher the confidence, the wider the range. For multiple experiments the region of particular importance is the overlap region, ie the range of values which is within all of the confidence limits given. For example, in the Experiment A and B case I just gave, the range [0.6,0.75] is common to both experiments. In general an experimental publication will give a distribution of likelihood for the value of X, akin to a probability distribution, saying "We're Z percent confident X lies between a and b".
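If you want to see how trivial the overlap step is, here's a few lines of Python using the made-up Experiment A and B numbers from above (nothing here is real data, it's purely illustrative):

```python
# Toy illustration using the made-up Experiment A and B ranges above.
a = (0.0, 0.75)   # Experiment A's confidence interval for X
b = (0.6, 1.1)    # Experiment B's confidence interval for X

# The overlap region is just the intersection of the two intervals.
lo = max(a[0], b[0])
hi = min(a[1], b[1])

if lo <= hi:
    print(f"Overlap region: [{lo}, {hi}]")   # -> [0.6, 0.75]
else:
    print("No overlap: the experiments are in tension")
```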
Now the issue of multiple experiments. Your conclusion that the range moving downwards somehow justifies your 'prediction' is flawed. The order is immaterial; the way of combining data from multiple experiments does not depend upon the chronological order in which they were done. This procedure of combining information from multiple experiments is known as data fusion and is an important thing for experimental scientists to understand. To illustrate this consider the following picture:
That's some quantity within the Standard Model determined from experiments and modelling. Each row represents a different experiment used to measure the quantity, with the range for each experiment shown as the blue and red bars and the red circles marking the range mean. The rows are not in chronological order, but even if you reshuffle them into chronological order the ranges shift left and right as you go from experiment to experiment. What matters is not the trend of this shifting but where they all overlap, and how the total data collected combines into a smaller range due to increased confidence (more experiments means more data means better statistics). Specifically, notice how the first and fourth experiments, both from 2010, are the furthest left and right respectively. Suppose we only considered those two, Experiment 1 and Experiment 4. If Experiment 1 was done before Experiment 4 then by your logic we'd expect future experiments to move further to the right of Experiment 4, into the $$m_{top} > 185\,\mathrm{GeV}$$ range, as that would be the 'trend' you claim is going on with pseudo-rapidity. Conversely, if Experiment 4 were done before Experiment 1 then your logic says future experiments should be further to the left, into the $$m_{top} < 160\,\mathrm{GeV}$$ range. Neither of those things occurred; future experiments didn't continue the trend. Instead future experiments began zeroing in on the region of overlap between Experiments 1 and 4, with the error bars shrinking as time passed due to accumulation of data.
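The combination step itself is nothing mysterious either. For independent Gaussian measurements the textbook approach is inverse-variance weighting, and here's a rough sketch of it in Python. The numbers below are invented for illustration, not read off the plot; the point is that the combined error bar is smaller than any individual one and the answer does not care about the order of the inputs:

```python
# Sketch of data fusion for independent Gaussian measurements via
# inverse-variance weighting. The numbers are invented for illustration.
measurements = [
    (168.0, 6.0),   # (mean, 1-sigma error) for experiment 1
    (176.0, 5.0),   # experiment 2
    (172.0, 3.0),   # experiment 3
    (174.0, 2.0),   # experiment 4
]

def fuse(data):
    """Combine independent Gaussian measurements given as (mean, sigma)."""
    weights = [1.0 / s**2 for _, s in data]
    mean = sum(w * m for (m, _), w in zip(data, weights)) / sum(weights)
    sigma = (1.0 / sum(weights)) ** 0.5
    return mean, sigma

print(fuse(measurements))                 # combined mean and (smaller) error
print(fuse(list(reversed(measurements)))) # same answer: order is immaterial
```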
Such diagrams are commonplace in any experimental domain of science, particularly in collider physics. If you had ever had the intellectual honesty to go find out about experimental methodologies, particularly after my repeated explanation of why you need to deal with raw data and not data processed through the Standard Model, as $$\alpha_{strong}$$ is, then you'd have seen such things. You'd know how fluctuations in range, both in terms of the end points and their mean, are to be expected. Hell, if you knew even the most rudimentary pieces of statistics you'd know to expect this; it is taught in high school. I'll elaborate on that since you no doubt have no idea what I'm referring to.
Suppose you want to measure some physical quantity X, whose true value is $$X_{0}$$, but your measuring method is imperfect and suffers from noise, which let's assume is Gaussian, $$\epsilon \sim \mathcal{N}(0,\sigma^{2})$$, ie you end up measuring $$X_{0}+\epsilon$$ for $$\epsilon$$ said random variable. You take a large number, N = PQ, of measurements and split them into P sets of size Q. We obviously expect the N samples to fluctuate about $$X = X_{0}$$ in a manner consistent with the $$\sigma^{2}$$ variance, but we're only able to access the sample mean and standard deviation, nothing more. To now put this in context in regards to experimental ranges, overlapping and fluctuating, we consider each of the P sets in turn. For set $$S_{i}$$ we compute the sample mean $$\langle S_{i} \rangle = s_{i}$$. It is a basic result within statistics that the sample means $$s_{i}$$ themselves follow a Gaussian distribution with mean $$X_{0}$$ and variance $$\frac{\sigma^{2}}{Q}$$.
Since you no doubt don't understand that either, I'll explain it in words too. We have P different experimental teams, each of which makes Q measurements of X. Each then computes the mean value $$s_{i}$$ and can give a range too, via the sample variance $$\sigma_{i}^{2}$$. For large P (ie so we get good statistics) we expect there to be approximately as many $$s_{i} < X_{0}$$ as there are $$s_{i} > X_{0}$$, with the spread following said Gaussian. This means that if we received each of the $$s_{i}$$ values in turn, along with the sample variance to draw the confidence interval bars, they would fluctuate about the true value $$X_{0}$$, sometimes greater than and sometimes less than $$X_{0}$$, building up a Gaussian sample set.
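You don't even need the analytic result to see this; a few lines of Python simulating P teams each making Q noisy measurements shows it directly (the true value and noise level below are arbitrary, chosen purely for the demonstration):

```python
# P 'teams' each make Q noisy measurements of the same true value X0;
# their sample means scatter around X0 with spread roughly sigma/sqrt(Q).
import random

X0, sigma = 2.1, 0.3      # true value and noise level (made up)
P, Q = 20, 100            # P teams, Q measurements each

team_means = []
for _ in range(P):
    samples = [X0 + random.gauss(0.0, sigma) for _ in range(Q)]
    team_means.append(sum(samples) / Q)

below = sum(m < X0 for m in team_means)
print(f"{below} team means below X0, {P - below} above")   # roughly half and half
spread = (sum((m - X0) ** 2 for m in team_means) / P) ** 0.5
print(f"spread of team means ~ {spread:.3f}, sigma/sqrt(Q) = {sigma / Q**0.5:.3f}")
```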
This is an illustration of the flaw in your 'logic' that the fact the second experimental range is less than (but still overlapping) the first experimental range implies a slide down to your 1.93 value. In the example I just gave the Gaussian property makes it simple to do analytically, simple enough for children to learn it in school. Of course this doesn't mean the pseudo-rapidity measurements necessarily follow Gaussian fluctuations, but that isn't necessary for the result I just illustrated, thanks to the central limit theorem.
Alternatively, your flawed logic can be seen by experiment. Get 2 dice and roll them 100 times, recording their total, $$X_{1}+X_{2}$$. Then split the 100 results into 10 lots of 10. For each set of 10 work out the range, ie lowest to highest. Then draw a diagram like the one I just posted, with each of the 10 ranges, and draw a vertical line at 7, since $$\langle X_{1} + X_{2} \rangle = 7$$ for two fair dice. You'll see some of the ranges are mostly to the left of 7 and others mostly to the right. Some might not even overlap, but this is an artefact of poor statistics due to small sample size and the use of range rather than standard deviation (I'm not sure if you're able to compute that, given your generally poor maths skills). The order in which you list the 10 sets is irrelevant too, unlike your implicit assumption about the role the chronological order of experiments plays in the overall result.
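If rolling dice for an evening is too much effort, the same experiment takes a few lines of Python (this is just the procedure from the previous paragraph done in software, nothing more):

```python
# The dice experiment described above, simulated.
import random

rolls = [random.randint(1, 6) + random.randint(1, 6) for _ in range(100)]
sets = [rolls[i * 10:(i + 1) * 10] for i in range(10)]   # 10 sets of 10 totals

for i, s in enumerate(sets, start=1):
    lo, hi = min(s), max(s)
    side = "left of 7" if sum(s) / len(s) < 7 else "right of 7"
    print(f"set {i:2d}: range [{lo}, {hi}], mean sits {side}")
# Some ranges sit mostly below 7, some mostly above; shuffling the order
# of the sets changes nothing about where the true mean (7) lies.
```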
So it would seem you have once again shown the major gaps in your understanding of even the most basic relevant subjects. You obviously haven't ever worked with experimental data in any practical way or even understood the methodology of statistical analysis, never mind the quantitative details in regards to distributions of sample means. It may well be the case that future experiments continue to move the range downwards, implying the two experimental values given thus far are actually statistical outliers, but this is not a certainty, particularly given the large number of individual measurements which go into each published range. Regardless, your assertion that there is undoubtedly a downwards trend is false; you're making a blanket statement about statistics which just isn't valid.
One range is [2.02,2.32] and the other is [2.2,2.5]. Neither of those includes 1.93. If those intervals represent 95% confidence limits (which is generally the case in experimental physics) then 1.93 is extremely unlikely to be consistent with the data.
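To put a rough number on "extremely unlikely", here's a quick Python check. It assumes the quoted ranges are symmetric 95% Gaussian confidence intervals, which is an assumption on my part; the actual papers state the convention they use:

```python
# Rough consistency check, assuming the quoted ranges are symmetric 95%
# Gaussian confidence intervals (an assumption; the papers state the convention).
intervals = [(2.02, 2.32), (2.2, 2.5)]
claim = 1.93

for lo, hi in intervals:
    mean = 0.5 * (lo + hi)
    sigma = (hi - lo) / (2 * 1.96)      # 95% CI half-width is about 1.96 sigma
    z = (mean - claim) / sigma
    print(f"[{lo}, {hi}]: 1.93 lies about {z:.1f} sigma below the mean")
# Roughly 3 sigma and 5 to 6 sigma out respectively, ie nowhere near consistent.
```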
I remembered what I wrote correctly and my point is valid. You call me dishonest, yet you just said "my result 1.93 overlaps with the new experimental data" when it doesn't; 1.93 is not in [2.02,2.32] nor [2.2,2.5]. Funny how you insinuate a degenerative neurological problem in me when you can tell such easily exposed lies as that. As I said, I remembered what I said correctly, the point was correct and I've now elaborated in detail on why your assertion that there is necessarily a downward trend is flawed. All using statistics taught in high school, never mind some niche corner of graduate level mathematics.
I do like it when hacks who denounce the mainstream use mainstream physicists as quote sources to justify themselves when they need to. Always a bit of amusing hypocrisy. The mathematics of renormalisation has been put on much firmer rigorous ground since Feynman died in 1988, and while it is not an all-powerful, perfect formalism it has clear practical utility; the most accurately tested model of science ever created by Man is quantum electrodynamics, which Feynman himself helped develop.
Sylwester, remember who you're talking to. You might be able to con your friends and family into thinking you understand quantum field theory or have experience with its inner workings, but we both know you don't. Some of us can actually do calculations with QFT. I'm certain I know more about various issues within quantum field theory than you, but I am also more familiar with its powerful applicability.
Also, renormalisation is precisely about preventing the proliferation of parameters. If a model is non-renormalisable then it needs infinitely many parameters to fit data, while a renormalisable model requires only a finite number.
Ah, so it is my fault you cannot even present your work properly. Quite.
As for your 'paper', it isn't a 'paper' in the proper peer reviewed and published sense, it's a document you just put on a hosting website. Still can't get a journal to publish your work, eh? Besides, I have plenty of other papers I have to read; some of us need to be able to produce results, not rhetoric on a forum.
Where did I say that? I didn't. I said it is dishonest of you to assert that particular experimental results demonstrate superluminal neutrinos when alternative explanations are still on the table. If both model A and model B explain experiment X then it is dishonest to say "Experiment X shows model A is true". No, that would only be the case if all alternatives were excluded. Explanations for the 1987 supernova which do not require superluminal neutrinos exist, and so the event cannot act as confirmation of your claims; it can only exclude those alternatives that are inconsistent with the event. Obviously this subtle issue is something you struggle to grasp.
So why haven't you gotten it published yet? Are you not sending it to journals? Why are you still on this forum spewing out rhetoric?
It suggests your reading skills are rather poor. You obviously have no problem insinuating Alzheimer's or schizophrenia in people, specifically me, which speaks volumes about the kind of person you are. Well done, you've spent 3 decades touting your nonsense and it has led to you being stuck in the pseudo-science section of a forum insinuating mental health problems in others. Perhaps you're just trying to be provocative so I'll reply. After all, if I didn't reply to you you'd be completely ignored by everyone. My my, haven't those 30 years been fruitful.