Randomness!

pharaohmoan

According to Wikipedia, randomness is a lack of order, purpose, cause, or predictability in non-scientific parlance. A random process is a repeating process whose outcomes follow no describable deterministic pattern.

But is randomness really random? Doesn't a random selection over a large group of numbers follow a pattern in order to appear random? Is there a law of randomness?

Try this for yourself: enter =RAND() into the first cell in Excel (column A). Next to it (column B), multiply that cell by 100. Then copy the formulas down to row 1000. You will notice that the numbers in column B fall between 0 and 100. Then average column B with =SUM(B1:B1000)/1000.
If there were a law to randomness you would expect the average of column B to be about 50. Well, I tried this experiment several times and got an average between 49.5 and 50.5; clearly this is deterministic.
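For anyone who wants to replicate this without Excel, here is a minimal sketch in Python; random.random() plays the role of =RAND():

import random

# Column B: 1000 uniform random numbers scaled to the range 0-100
values = [random.random() * 100 for _ in range(1000)]

# Equivalent of =SUM(B1:B1000)/1000
print(sum(values) / len(values))  # typically lands between 49.5 and 50.5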

Does this not mean that randomness isn't random at all but follows some underlying law we don't know about? :eek:
 
Does this not mean that randomness isn't random at all but follows some underlying law we don't know about

This is very perplexing because I have very random thoughts about it all! ;)
 
Is there an average to your random thoughts? :rolleyes:

I'm quite imbalanced, as you know, so average isn't a thing I really understand. I strive for either below average or above, for I seem to be at a random point in living, which puts my life in total flux. ;)
 
pharaohmoan said:
Does this not mean that randomness isn't random at all but follows some underlying law we don't know about? :eek:
True randomness must be natural, as any formula would not really be random even if it appears to be. Most computers use the timer (the number of milliseconds the computer has been on) to get a non-repeating seed, and a formula then makes the output appear random.

I think there are computers that use other factors too, ones that are more natural, like the temperature of the processor or the white noise from the microphone. Scientifically, the way to achieve randomness may be to measure the radiation from atoms that are unstable.

There is of course a lot of discussion going on about whether anything really is random, even naturally, or whether there are underlying principles and hidden variables that we just haven't found yet.

Also, systems quite quickly become chaotic if left to themselves, especially fluid or gas systems (where the molecules are much more mobile and easier to disturb). Even a small change in such a system gives rise to unpredictable results, which is really not random but just due to the complexity of the system; if there were something truly random in that system, we would not be able to discern it, I guess.
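To make that last point concrete, here is a minimal sketch in Python of the logistic map, a standard toy model of chaos (my choice of example, not one from the thread): two starting points differing by one part in a million end up in completely different places after a few dozen steps, even though every step is deterministic.

# Logistic map: x -> r * x * (1 - x), fully deterministic yet chaotic at r = 4.0
def iterate(x, r=4.0, steps=50):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

print(iterate(0.400000))
print(iterate(0.400001))  # a tiny change in input, a completely different output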
 
If there were a law to randomness you would expect the average of column B to be about 50. Well, I tried this experiment several times and got an average between 49.5 and 50.5; clearly this is deterministic.
I think you're using the concept of determinism wrongly here. Determinism would imply that you can predict exactly what the average will be, and you can't. An outcome can be random even if you have a good expectation of what it will be, i.e. if you know its probability distribution.
Does this not mean that randomness isn't random at all but follows some underlying law we don't know about? :eek:
No, that's quite expected. Random variables can certainly have known probability distributions: in this case you're finding the average of a number of uniformly distributed random variables, which gives a non-uniform distribution, with values around 0.5 (or 50 in your column B) far more likely than values around 0 or 1.

Try looking up some basic probability theory. By the way, your observation is actually quite sound: the "randomness" of the individual variables isn't conserved when you combine several of them in this way, which is one of the common head-wringers in probability theory...

Here's an example of why the distribution isn't conserved: Let's look at two uniformly distributed random variables between 0 and 1, called $$X$$ and $$Y$$. Their probability distributions are given as

$$
P(X \leq x)=x \qquad x \in [ 0,1 ]
$$

i.e. the likelihood that random variable $$X$$ is at most some number $$x$$ in the interval $$[0,1]$$ is $$x$$. For instance, the likelihood that $$X$$ is below 0.5 is exactly 0.5. Analogously for $$Y$$.

Now, what is the likelihood that both $$X$$ and $$Y$$ are below 0.5? Well, since $$X$$ and $$Y$$ are independent, this is the product of the individual probabilities:

$$
P(X \leq 0.5 \cap Y \leq 0.5) = P(X \leq 0.5) \cdot P(Y \leq 0.5) = 0.5^2 = 0.25
$$
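If you want to check that product numerically, here is a quick Monte Carlo sketch, assuming nothing beyond Python's random module:

import random

trials = 100_000
hits = sum(1 for _ in range(trials)
           if random.random() <= 0.5 and random.random() <= 0.5)

# Two independent uniform variables are both below 0.5 about a quarter of the time
print(hits / trials)  # converges to 0.25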

The distribution of the average is a bit more complex, and easier to explain with finite distributions. Think of dice rolls: If you roll two dice, what is the most likely sum?
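One way to answer that is simply to enumerate the 36 equally likely outcomes; a minimal sketch in Python:

from collections import Counter

# Tally each possible sum over the 36 equally likely ordered rolls
counts = Counter(a + b for a in range(1, 7) for b in range(1, 7))
for total in sorted(counts):
    print(total, f"{counts[total]}/36")
# 7 is the most likely sum, with 6 of the 36 outcomes (probability 1/6)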
 
Cyperium said:
True randomness must be natural, as any formula would not really be random even if it appears to be.

I have noticed that many people confuse randomness with unpredictability. A system may be determined but appear unpredictable because we do not know the starting conditions.
 
I have noticed that many people confuse randomness with unpredictability. A system may be determined but appear unpredictable because we do not know the starting conditions.

Good point. I had a question: if there were a seemingly random pattern, would we be able to tell it was random before it was over? In other words, couldn't the pattern be revealed by the next step(s) in the sequence?
 
The digits of pi are random, but they contain order: sometimes a single digit just starts repeating several times. Same with reality: it's mostly random, but sometimes there is a planet, a sun, patterns. We can't interpret randomness, so we see just space, nothing.
 
Cyperium is right. No computer can truly create random numbers. All random number generators use a formula and a seed to create the output. In the case of Excel, the formula is:

random_number = fractional part of (9821 * r + 0.211327)

where r is initially set by the system clock, and for each successive number, r is set by the previous number.

All this is in the MS support site here:
support.microsoft.com/kb/86523
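For the curious, that recurrence is easy to play with; here is a sketch in Python (the starting seed below is an arbitrary stand-in for the clock-derived value):

import math

def excel_rand(r):
    # Fractional part of (9821 * r + 0.211327), per the KB article above
    return math.modf(9821 * r + 0.211327)[0]

r = 0.123456  # arbitrary stand-in for the system-clock seed
for _ in range(5):
    r = excel_rand(r)  # each new number is seeded by the previous one
    print(r)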
 
Cyperium is right. No computer can truly create random numbers. All random number generators use a formula and a seed to create the output.

Heh... All I had to do was read the name to know who this was...
 
myles said:
I have noticed that many people confuse randomness with unpredictability. A system may be determined but appear unpredictable because we do not know the starting conditions.
The situation is worse than that: even if the starting conditions are known perfectly, and the system is completely deterministic and based on just a couple of simple rules for changing from one state to another, there may be no way to predict future states without running the whole system to them.

Stephen Wolfram (of Mathematica) goes into this in detail with cellular automata, but it has been known since Turing's formulation of the halting problem.

The combination of that, the unsolvability of high order equations, the existence of transcendental numbers, chaos indeterminacy (which prevents prediction by amplifying the unmeasurable), and so forth, destroys the normal worldview of "determined" vs "random". If mathematics describes any reality in the world, neither of these concepts nor the combination of both describes very much of it, fundamentally.
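As one concrete illustration of the cellular-automaton point, here is a sketch of Wolfram's Rule 30 in Python (with wrap-around edges for simplicity): the update rule is trivial and fully deterministic, yet there is no known shortcut for finding the pattern at step n other than running all n steps.

def rule30(cells):
    # New cell = left XOR (centre OR right); indices wrap around at the edges
    n = len(cells)
    return [cells[i - 1] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

cells = [0] * 31
cells[15] = 1  # start from a single live cell
for _ in range(16):
    print("".join("#" if c else "." for c in cells))
    cells = rule30(cells)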
 
Good point. I had a question: if there were a seemingly random pattern, would we be able to tell it was random before it was over? In other words, couldn't the pattern be revealed by the next step(s) in the sequence?

I don't see how this could happen, because the results of the "next step" would bear no obvious relationship to the previous step. If we inferred a relationship, the next step would prove us wrong.

A seemingly random system is a different matter. We would only know it was not truly random if we could see the whole series, in which a pattern would be identifiable. It might be better to think of it as a complex determined system.
 
The situation is worse than that: even if the starting conditions are known perfectly, there may be no way to predict future states without running the whole system to them.


Thanks for that. I vaguely remember reading something of the kind many years ago.
 
pharaohmoan:

Your list of 1000 numbers between 0 and 100 might well satisfy certain tests for randomness (ignoring the fact that your computer really only produces what are known as pseudorandom numbers). But that certainly does NOT mean that a calculated average of such numbers will be random.

What you have done is to specify a distribution of numbers at the start. You specified that the numbers generated MUST lie between 0 and 100. If you wanted a truly random average, you would need to use an unlimited range of "random" numbers.

In effect, by restricting the spread of numbers you generated, you also placed constraints on the average. The spread in the average you obtain when you run your experiment over and over again is determined by the number of data points in each run. In your case, by choosing to generate 1000 numbers each time, you have restricted the average to be within perhaps a few tenths of 50. More numbers would give you an average even closer to 50. Fewer would give you wider variation. (For example, try your experiment generating only 10 numbers at a time, or 5, or 1.)
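You can watch that spread shrink with a short simulation; a sketch in Python (200 repetitions per sample size is an arbitrary choice):

import random
import statistics

def one_run(n):
    # One 'experiment': the average of n uniform numbers between 0 and 100
    return sum(random.random() * 100 for _ in range(n)) / n

for n in (1, 5, 10, 1000):
    averages = [one_run(n) for _ in range(200)]
    print(n, round(statistics.stdev(averages), 2))
# The run-to-run spread falls roughly as 1/sqrt(n): wide for n = 1, tight for n = 1000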

In summary, you imposed some randomness and concentrated on that, while ignoring all the non-randomness inherent in what you did. So, you shouldn't be surprised at the non-random average.
 
James R said:
In summary, you imposed some randomness and concentrated on that, while ignoring all the non-randomness inherent in what you did. So, you shouldn't be surprised at the non-random average.

Thanks James and the other posts. This topic is much clearer to me now. I must admit there is quite a gathering of clever bods on this forum. By my thousandth post I hope to be able to build a small nuclear reactor, and debunk the theory of relativity.;)
 
James R said:
Your list of 1000 numbers between 0 and 100 might well satisfy certain tests for randomness (ignoring the fact that your computer really only produces what are known as pseudorandom numbers). But that certainly does NOT mean that a calculated average of such numbers will be random.
Yes, it does. However, it doesn't mean that the distribution will be the same as for the individual numbers, and certainly not that the distribution will be uniform.

It appears that people are quite confused about what "random" actually means, and default to (implicitly) believing that it means "random variable with uniform distribution" (a probability theoretical term). I don't think this is a very useful definition.

Again, think of the sum of a pair of dice. The probability distribution is non-uniform on the integers from 2 to 12. Does the non-uniformity then mean that the sum is not random?

Another example: we can predict the average time between radioactive decays for some sample, but the actual time between any two decays is effectively random.
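That distinction is easy to simulate; a sketch in Python, with an arbitrary decay rate, using exponentially distributed waiting times (the standard model for decay intervals):

import random

rate = 2.0  # arbitrary decay rate, so the mean waiting time should be 1/rate = 0.5
waits = [random.expovariate(rate) for _ in range(10_000)]

print(waits[:5])                # individual intervals: effectively unpredictable
print(sum(waits) / len(waits))  # their average: reliably close to 0.5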
 