Does the Speed of Light Change?

Most recent answer: 8/26/2013

Q:
Hi, I've read almost all these questions and responses, so I hope you won't just repeat any of those... I've learned that before the 1900s the speed of light was actually measured at different times with different experiments, and those measurements showed different speeds. It wasn't until the beginning of the 1900s that the "speed of light" was redefined and the units of measurement were coupled to it. In that context I ask: how could any measurement in any experiment afterwards, and dare I say until this day, actually disprove that the speed of light is changing? It might still change, and we could not see it, because the units we use to measure the change would change with it (because they are linked). I mean, I've talked to a lot of scientists, and all assume it's constant because that's what they were taught. Nobody ever gave it a second thought either. On your page at least, I've found some references to experiments that would actually prove it's constant; then again, after that one experiment (I don't care even if it's a dozen experiments), nobody dares to challenge it again... It is the same with energy and matter, the assumption that the amounts don't change. Researching it, I haven't found any good reference to an experiment that proves it. In textbooks you can read that it is actually an assumption (going back to the Greek idea that the atom could not be split). And what if the universe actually grew, larger and larger (like an organism), with energy levels and matter varying over time? It wouldn't need any dark matter or dark energy at all to explain expansion, let alone accelerating expansion... I'd be happy to discuss that subject one day. To conclude, I would be happy to have in your reply some links and references to actual experiments carried out, and their results, that prove the speed of light is a constant (not using units of measurement linked to the speed of light, of course).
You have referenced "experiments" in general, so I'm just curious. I do believe in science, which is also constantly "doubting" itself; without that doubt, how could any experiment lead to a definite answer? For your reference: "Increased accuracy of c and redefinition of the metre: In the second half of the 20th century much progress was made in increasing the accuracy of measurements of the speed of light, first by cavity resonance techniques and later by laser interferometer techniques. In 1972, using the latter method and the 1960 definition of the metre in terms of a particular spectral line of krypton-86, a group at NBS in Boulder, Colorado determined the speed of light in vacuum to be c = 299,792,456.2±1.1 m/s. This was 100 times less uncertain than the previously accepted value. The remaining uncertainty was mainly related to the definition of the metre. As similar experiments found comparable results for c, the 15th Conférence Générale des Poids et Mesures (CGPM) in 1975 recommended using the value 299,792,458 m/s for the speed of light. In 1983 the 17th CGPM redefined the metre thus: 'The metre is the length of the path travelled by light in vacuum during a time interval of 1/299 792 458 of a second.' As a result of this definition, the value of the speed of light in vacuum is exactly 299,792,458 m/s and has become a defined constant in the SI system of units. Improved experimental techniques do not affect the value of the speed of light in SI units, but instead allow a more precise realization of the metre." So, I ask you again: if the speed were changing, and the unit we use to measure it is the metre, then based on the above "assumption" we could never notice it, because "the metre is the length of the path travelled by light in vacuum during a time interval of 1/299 792 458 of a second"... Huh? Isn't that circular reasoning?
- Wouter Vanbelleghem (age 36)
Antwerp, Belgium
A:

Your question gets at a key issue. The only quantities to which we can assign non-arbitrary values are dimensionless ones, such as the ratio of the proton mass to the electron mass. Any quantity with units can acquire a different value just by redefining the units. So the real question becomes whether various dimensionless quantities that involve c have changed over time. Perhaps the most familiar such quantity is the fine structure constant, 2πe²/hc. So far as anybody can tell from looking at the spectral lines from ancient galaxies, it hasn't changed.
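To make the dimensionless point concrete, here's a minimal sketch computing the fine structure constant from SI values (the constants below are approximate CODATA-style numbers, used for illustration only). In SI units α = e²/(4πε₀ℏc), which is equivalent to the Gaussian form 2πe²/hc quoted above; note that c enters the formula, yet α itself carries no units, so its value doesn't depend on how the metre or second is defined:

```python
# Sketch (illustrative values): the fine-structure constant alpha is
# dimensionless, so its numerical value does not depend on unit choices.
# SI form: alpha = e^2 / (4*pi*eps0*hbar*c), equivalent to 2*pi*e^2/(h*c)
# in Gaussian units.
import math

e = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 299792458            # speed of light, m/s (exact by definition in SI)
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(alpha)      # ~0.0072973526
print(1 / alpha)  # ~137.036
```

A change in any single constant here could always be absorbed into a redefinition of units; only a change in the combined, dimensionless α would be physically meaningful.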

Of course, you might argue that this just means h and e have changed in a way that compensates for the change in c. Unless some other dimensionless quantity is found to change, adding such hypotheses just makes things complicated without getting anywhere. If it does turn out that some fundamental dimensionless quantity has changed, then attributing the change to a change in c may be one option.

Mike W.

p.s. I think the discussion earlier in the thread was more about whether c is constant as viewed in different frames, not as viewed over time.


(published on 08/21/2013)

Follow-Up #1: Young-Earth creation?

Q:
Science is not only about getting new ideas; it is also about changing the many false ideas in which we believe. In 1986 Setterfield claimed in a report that the speed of light "c" was never constant, and that it has been decreasing since the very start of the universe: http://www.setterfield.org/report/report.html A similar and more detailed study in 1993 by Dolphin and Montgomery showed the same outcome. This theory was proven by many experiments, one of them a study on granite rocks in Australia and the USA. In this study, based on the decay rates we know, alpha decay experiments on these rocks indicated that the rocks are 2 million years old; however, geological research showed that they cannot be more than a few thousand years old. These outcomes support the theory that c, as well as decay rates, are not constants. This experiment is explained at the following link: http://www.icr.org/article/young-helium-diffusion-age-zircons/ This theory also offers explanations for many issues, like the existence of high redshift values. There are many stupid ideas for explaining the existence of such high values, like the claim that the universe is expanding faster than the speed of light: http://www.youtube.com/watch?v=myjaVI7_6Is This phenomenon is simply explained under the terms of the c-decay theory. The c-decay theory gives explanations for many problems regarding the big bang, like the flatness and horizon issues. Moreover, this theory does not contradict Einstein's and Lorentz's work, as Einstein's second postulate is not actually necessary for the derivation of the Lorentz transformations; nowadays there are many other ways to derive them. No one, as far as I know, has actually proved this theory wrong, or given another logical explanation for the items mentioned above. I do find it very weird that no one has ever heard of this theory, nor has it been mentioned as a discussion issue. Why is that?
How could this theory be wrong? I think this theory is ignored because of Setterfield's religious background, but, scientifically speaking, I do not see anything wrong with it. If this theory is invalid, how could we possibly explain the existence of high redshift values, and the outcomes of the granite rock experiments?
- Yahya (age 19)
Damascus, Syria
A:

It's a bit difficult to extract from your note any specific theory you might be referring to. The most dramatic point you raise is the claim that the earth is only a few thousand years old, and that all the radioisotope dating methods etc. are completely wrong. The paper you cite in defense of that says that more helium is left in some zircon rocks than would remain if the helium had been produced over geological time by radioactive decay. The argument involves some very tricky claims about why helium should diffuse out much faster than you'd estimate by extrapolating from standard measurements at higher temperatures. This tricky argument has several serious flaws, including a failure to consider the strong effects of pressure on diffusion rates.

The other paper you cite attempts to estimate a change in the speed of light by looking at tiny differences found in results from various measurements by a variety of techniques over the last few hundred years. That's a very shaky way to approach a fundamental question. Precision modern measurements of possible changes in fundamental constants, reflected in atomic spectra, haven't turned up anything.

I'm very puzzled that in the last week or so we've gotten a batch of questions trying to attack relativity from all sorts of different directions, typically based on error bars in antique experiments. Is there some sort of international anti-relativity week?

Mike W.


(published on 08/22/2013)

Follow-Up #2: Is the speed of light changing?

Q:
Thank you for your answer. There are, however, some points you misunderstood, and I will try to clarify them:

1. I am not actually saying that the earth is a few thousand years old; I am not referring to that issue, though I know it is one of the claims, or possible outcomes, of the c-decay theory. What I really want to discuss is the theory in general. I did not say that the radioisotope dating methods are wrong. The main problem is that the decay rates are considered constants, whereas the studies show they have been decreasing because they are related to c. I will study further the effects of pressure on the diffusion rates.

2. Regarding the main topic: the c measurements do not all date back many years to when measuring the speed of light was high science; there are many new measurements that show this decay. It is true that precision modern measurements of possible changes in fundamental constants haven't turned up anything, but that is only because they were obtained using atomic clocks as time standards. Thomas Van Flandern of the National Bureau of Standards noticed a slight deviation of the orbital period of the moon between 1955 and 1981 as measured by atomic clocks. He concluded: "...if this result has any generality to it, this means that atomic phenomena are slowing down with respect to dynamic phenomena...though we cannot tell whether the changes are occurring at the atomic or dynamic level." This slowdown of atomic phenomena with respect to dynamic phenomena supports the c-decay theory, and explains why we would not notice the decrease of c when it is measured using atomic clocks as a time standard: the atomic clock's time would change uniformly with a change in c.

3. As I said before, this theory gives explanations for many fundamental issues, including the high redshift values. Since you did not mention this point in your reply, I will explain the contradiction we are led to regarding the high redshift values if the c-decay theory is invalid, and reshape my first question. As far as I know, the universe's expansion is characterized by the Hubble constant, approximately equal to 71 in the technically useful but conceptually confusing units of "kilometers per second per megaparsec"; and when we refer to the distance between two galaxies, we are referring to the distance between them right now -- that is, the distance we would measure if we somehow "pressed the freeze-frame button" on the universe. Hubble found a linear relationship between speed and distance, formulated in the following equation: speed = Hubble constant × distance. According to this equation, and to the high redshift values obtained from observations (values up to z = 8), simple calculations lead us to conclude that the universe is expanding faster than the speed of light if we do not consider the c-decay theory valid. Thus, taking the c-decay theory to be invalid, we are led to a great contradiction and a very stupid result. And that is what I wanted to say from the very beginning. Reshaping my previous question: is there any logical solution to this problem that neither leads us to such a stupid result nor depends on the validity of the c-decay theory? http://curious.astro.cornell.edu/question.php?number=575

4. I am not attacking relativity at all. In fact, I said in my first question that c decay does not contradict relativity, as there are many other ways to derive the Lorentz transformations nowadays.
But even if I were attacking relativity, that should not be a problem; nothing is completely or absolutely right, and every single one of us is supposed to doubt in order to believe. In closing, I just want to make clear that I am not supporting the c-decay theory; I am not even a creationist. But I do feel that this theory explains many things and gives logical results in many respects.
- Yahya (age 19)
Damascus, Syria
A:

Thanks for these clarifications. Here are some thoughts in response.

1. Real theories are not that adjustable. One can't say both that a big piece of the evidence for the theory is that decay rates changed enormously on a very short time scale (6000 years) and, on the contrary, that the theory predicts only subtle changes over long time spans. Real theories have to make consistent predictions for a wide range of phenomena.

2. I tried to find some of van Flandern's work. Before I could get to anything about the Moon, I found a paper by him trying to describe and critique relativity. It was confused. For example, it contains this line: "General Relativity (GR) predicts that clocks in a stronger gravitational field will tick at a slower rate." That's false. It's the gravitational potential, not the field, that determines the rate in a standard choice of coordinates. In other respects the paper is simply out of date, since some of the most dramatic confirmations of GR (e.g. frame-dragging as measured by Gravity Probe B) are more recent.
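The potential-versus-field distinction can be illustrated with a quick weak-field estimate (a sketch with illustrative numbers: Earth's mass and radius, and an assumed GPS orbital radius). The clock rate depends on the Newtonian potential -GM/r, not on the field strength GM/r²:

```python
# Sketch (weak-field estimate, illustrative numbers): a clock sitting at
# Newtonian potential phi (phi < 0, approaching 0 far away) runs slow by a
# factor of about 1 + phi/c^2 relative to a distant clock. The rate depends
# on the potential GM/r, not on the field strength GM/r^2.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24       # Earth mass, kg
c = 299792458.0    # speed of light, m/s

def rate_factor(r):
    """Clock rate at radius r relative to a clock far from Earth."""
    phi = -G * M / r
    return 1 + phi / c**2

r_surface = 6.371e6   # Earth radius, m
r_gps = 2.6571e7      # GPS orbital radius, m (assumed illustrative value)

# The GPS clock sits at higher (less negative) potential, so it runs faster:
print(rate_factor(r_gps) - rate_factor(r_surface))  # ~5.3e-10
```

This is only the gravitational part of the effect; a full treatment of an orbiting clock would also include the special-relativistic slowdown from its orbital speed.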

3. Standard cosmology has no trouble accounting for large redshifts, in a theory that makes many other remarkably accurate predictions, e.g. for the details of the ripples in the microwave background. Why should we be looking for an alternative theory whose predictions are unclear even on whether the universe is 6000 years old? The various phrases you have cut and pasted from the Cornell site about Hubble etc. are not entirely clear. For example, their "freeze-frame" distance is ill-defined unless you assume the existence of absolute simultaneity. There is a well-defined version of what they're after, but that's not it.

The Hubble equation does not apply, either in GR theory or in practice, at large redshifts. At any rate, there is no particular reason to be alarmed that some things that were once within sight are "now" outside our horizon. Why is that "stupid"?
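To see where the naive extrapolation runs into trouble, here's a minimal sketch using the H₀ ≈ 71 km/s/Mpc figure quoted in the question. Beyond the "Hubble distance" d = c/H₀, the linear law v = H₀d gives formal recession speeds above c, which signals the breakdown of the linear approximation at large redshift rather than a physical contradiction:

```python
# Sketch: naive extrapolation of Hubble's law v = H0 * d, using the
# H0 ~ 71 km/s/Mpc value quoted in the question (illustrative only).
H0 = 71.0        # Hubble constant, km/s per Mpc
c = 299792.458   # speed of light, km/s

hubble_distance = c / H0        # Mpc; distance at which v = c
print(round(hubble_distance))   # ~4222 Mpc

for d in (1000, 4000, 5000, 10000):   # distances in Mpc
    v = H0 * d                        # naive recession speed, km/s
    print(d, v, v > c)                # True beyond the Hubble distance
```

These "speeds" are coordinate growth rates of distances in an expanding metric, not velocities of objects moving through space, which is why special relativity's speed limit is not violated.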

4. Your point about maintaining doubt is quite important. Most physicists expect that GR will break down in some domain. We'd just be surprised if that breakdown occurred right in the middle of the range of phenomena for which it's provided such spectacularly accurate predictions for precise modern measurements.

Mike W.


(published on 08/26/2013)