Quantum Entanglement is a physical phenomenon which occurs when pairs or groups of particles are generated, interact, or share spatial proximity in ways such that the quantum state of each particle cannot be described independently of the state of the other(s), even when the particles are separated by a large distance. Measurements of physical properties such as position, momentum, spin, and polarization performed on entangled particles are found to be correlated. For example, if a pair of particles is generated in such a way that their total spin is known to be zero, and one particle is found to have clockwise spin on a certain axis, the spin of the other particle, measured on the same axis, will be found to be counterclockwise, as is to be expected due to their entanglement.

However, this behavior gives rise to seemingly paradoxical effects: any measurement of a property of a particle performs an irreversible collapse on that particle and will change the original quantum state. In the case of entangled particles, such a measurement will be on the entangled system as a whole. Such phenomena were the subject of a 1935 paper by Albert Einstein, Boris Podolsky, and Nathan Rosen, and several papers by Erwin Schrödinger shortly thereafter, describing what came to be known as the EPR paradox. Einstein and others considered such behavior to be impossible, as it violated the local realist view of causality (Einstein referring to it as "spooky action at a distance"), and argued that the accepted formulation of quantum mechanics must, therefore, be incomplete.

Later, however, the counterintuitive predictions of quantum mechanics were verified experimentally in tests where the polarization or spin of entangled particles was measured at separate locations, statistically violating Bell's inequality. In earlier tests, it couldn't be absolutely ruled out that the test result at one point could have been subtly transmitted to the remote point, affecting the outcome at the second location. However, so-called "loophole-free" Bell tests have since been performed in which the locations were separated such that communications at the speed of light would have taken longer (in one case 10,000 times longer) than the interval between the measurements.

According to some interpretations of quantum mechanics, the effect of one measurement occurs instantly. Other interpretations, which don't recognize wavefunction collapse, dispute that there is any "effect" at all. However, all interpretations agree that entanglement produces correlation between the measurements, and that the mutual information between the entangled particles can be exploited, but that any transmission of information at faster-than-light speeds is impossible. Quantum entanglement has been demonstrated experimentally with photons, neutrinos, electrons, molecules as large as buckyballs, and even small diamonds. The utilization of entanglement in communication and computation is a very active area of research.
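To make the violated correlation concrete, below is a minimal Python sketch (my own illustration, not from any of the papers mentioned above) that simulates repeated spin measurements on singlet pairs using the textbook quantum prediction E(a, b) = -cos(a - b), and then evaluates the CHSH combination of four correlations, which any local-realist theory must keep within ±2:

import math
import random

def singlet_pair(a, b):
    # Alice's outcome along angle a is individually random (+1 or -1)
    alice = random.choice([-1, 1])
    # Bob's outcome along angle b is opposite to Alice's with probability
    # cos^2((a - b) / 2), which reproduces the quantum correlation
    # E(a, b) = -cos(a - b) for a spin-singlet pair
    if random.random() < math.cos((a - b) / 2) ** 2:
        bob = -alice
    else:
        bob = alice
    return alice, bob

def correlation(a, b, trials=100_000):
    # Estimate E(a, b) as the average product of the paired outcomes
    total = 0
    for _ in range(trials):
        x, y = singlet_pair(a, b)
        total += x * y
    return total / trials

# CHSH test: any local-realist theory must keep |S| <= 2
a1, a2 = 0.0, math.pi / 2               # Alice's two measurement angles
b1, b2 = math.pi / 4, 3 * math.pi / 4   # Bob's two measurement angles
S = (correlation(a1, b1) - correlation(a1, b2)
     + correlation(a2, b1) + correlation(a2, b2))
print(f"S = {S:.3f}; quantum mechanics predicts -2*sqrt(2) = {-2 * math.sqrt(2):.3f}")

Running it prints a value near -2.83, whose magnitude exceeds the classical bound of 2: the statistical violation of Bell's inequality referred to above.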
The Speed of Light in vacuum, commonly denoted c, is a universal physical constant important in many areas of physics. Its exact value is 299,792,458 metres per second (approximately 300,000 km/s, or 186,000 mi/s). It is exact because, by international agreement, a metre is defined to be the length of the path travelled by light in vacuum during a time interval of 1/299,792,458 of a second. According to special relativity, c is the maximum speed at which all conventional matter, and hence all known forms of information in the universe, can travel. Though this speed is most commonly associated with light, it is in fact the speed at which all massless particles and changes of the associated fields travel in vacuum (including electromagnetic radiation and gravitational waves). Such particles and waves travel at c regardless of the motion of the source or the inertial reference frame of the observer. In the special and general theories of relativity, c interrelates space and time, and also appears in the famous equation of mass–energy equivalence, E = mc².

The speed at which light propagates through transparent materials, such as glass or air, is less than c; similarly, the speed of electromagnetic waves in wire cables is slower than c. The ratio between c and the speed v at which light travels in a material is called the refractive index n of the material (n = c / v). For example, for visible light the refractive index of glass is typically around 1.5, meaning that light in glass travels at c / 1.5 ≈ 200,000 km/s (124,000 mi/s); the refractive index of air for visible light is about 1.0003, so the speed of light in air is about 299,700 km/s (186,220 mi/s), which is about 90 km/s (56 mi/s) slower than c.

For many practical purposes, light and other electromagnetic waves will appear to propagate instantaneously, but for long distances and very sensitive measurements, their finite speed has noticeable effects. In communicating with distant space probes, it can take minutes to hours for a message to get from Earth to the spacecraft, or vice versa. The light seen from stars left them many years ago, allowing the study of the history of the universe by looking at distant objects. The finite speed of light also limits the theoretical maximum speed of computers, since information must be sent within the computer from chip to chip. The speed of light can be used with time-of-flight measurements to measure large distances to high precision.

Ole Rømer first demonstrated in 1676 that light travels at a finite speed (as opposed to instantaneously) by studying the apparent motion of Jupiter's moon Io. In 1865, James Clerk Maxwell proposed that light was an electromagnetic wave, and therefore travelled at the speed c appearing in his theory of electromagnetism. In 1905, Albert Einstein postulated that the speed of light c with respect to any inertial frame is a constant and is independent of the motion of the light source. He explored the consequences of that postulate by deriving the theory of relativity, and in doing so showed that the parameter c had relevance outside the context of light and electromagnetism. After centuries of increasingly precise measurements, by 1975 the speed of light was known to be 299,792,458 m/s (983,571,056 ft/s; 186,282.397 mi/s) with a measurement uncertainty of 4 parts per billion. In 1983, the metre was redefined in the International System of Units (SI) as the distance travelled by light in vacuum in 1/299,792,458 of a second.
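The figures above are simple enough to verify directly. The short Python sketch below recomputes the speeds in glass and air from the refractive indices given in the text, and estimates the signal delay to a space probe near Mars, assuming an average Earth–Mars distance of roughly 225 million kilometres (an illustrative round number, not a figure from the text):

C = 299_792_458  # speed of light in vacuum, metres per second (exact by definition)

def speed_in_medium(n):
    # Light travels at v = c / n in a medium with refractive index n
    return C / n

print(f"in glass (n = 1.5):  {speed_in_medium(1.5) / 1000:,.0f} km/s")
print(f"in air (n = 1.0003): {speed_in_medium(1.0003) / 1000:,.0f} km/s")

# One-way signal delay to a probe near Mars, using an assumed average
# Earth-Mars distance of ~225 million km (illustrative, not from the text)
distance_m = 225e9
print(f"Earth-to-Mars signal delay: {distance_m / C / 60:.1f} minutes")

With these assumptions, the one-way delay comes out to about 12.5 minutes, which is why controlling distant probes in real time is impossible.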
Quantum Entanglement occurs instantaneously, no matter the distance between the two particles. The Speed of Light limitation would require that it take some time for the change of state to be transmitted from one particle to the other. As can be seen, these two ideas conflict. However, both have been “proven” to be true. Over a century of observation, comprising tens of thousands of experiments, has shown that the Speed of Light limit is real. In the past few decades, dozens of experiments have also shown that Quantum Entanglement is real.
There cannot be two different realities in the Universe, and this conflict has perplexed scientists. There is much Scientific Speculation as to how to resolve this conflict, but as yet there is no actual science that resolves it. The resolution of this conflict will have immense impacts on our understanding of the workings of our universe, and will give us a better understanding of “Reality”. Therefore, this conflict needs to be resolved for physics to advance.
The arrow of time refers to the questions of what time means, why and how time flows, and what the physical nature of time is. As these questions are somewhat tortuous, I have written a brief article, “The Arrow of Time”, which examines this issue. I believe that it is important to resolve these questions, as they are fundamental to the nature of reality as outlined in another article of mine, “What is Reality?”.
Modern Physics is grappling with three outstanding problems, as follows.
The first is the problem of Gravity. Gravity is a universal force, but the “Standard Model” of particle physics has no explanation for gravity, and our theory of gravity cannot incorporate the “Standard Model”. Until Gravity and the “Standard Model” can be unified (through a “Grand Unified Theory” – GUT), it is not possible to have a full understanding of how and why the Universe works.
The second is the problem of Dark Matter. In the early 1990s, astrophysicists verified the existence of Dark Matter: matter that exists in the universe but that we cannot see. This was done by utilizing space telescopes to take a census of the stars, and their star types, in a galaxy to determine the approximate mass of the galaxy; then measuring the motion of selected stars through the galaxy; and then feeding this information into a supercomputer that utilized Einstein’s General Relativity equations to produce a gravitational model of the galaxy. To their surprise, the model said that the galaxy could not exist, because there was insufficient mass to hold it together. They did this for several galaxies, then dozens of galaxies, and every time the computer model said that the galaxy could not exist. They then adjusted the amount of mass in each galaxy until the model agreed with what they were observing. In every case, the adjustment was the same: the amount of normal matter (baryonic matter) was 20% of what was needed, while 80% of the matter was unseen, which they named “Dark Matter”. The astrophysicists went to the quantum physicists to ask what this Dark Matter could be. The quantum physicists had no answer. Yet everyone agrees that Dark Matter exists, and until the “Standard Model” can incorporate Dark Matter it will be incomplete.
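To illustrate the mass-balance reasoning described above, here is a rough Python sketch that uses the Newtonian circular-orbit formula v = sqrt(G·M/r) as a simple stand-in for the General Relativity models the astrophysicists actually used; the galaxy mass, radius, and orbital speed below are illustrative round numbers, not real measurements:

import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # mass of the Sun, kg
KPC = 3.086e19     # one kiloparsec, in metres

def circular_speed(mass_kg, radius_m):
    # Newtonian orbital speed around an enclosed mass: v = sqrt(G * M / r)
    return math.sqrt(G * mass_kg / radius_m)

# Illustrative round numbers (not actual measurements): a galaxy with
# 50 billion solar masses of visible matter, and a star observed
# orbiting at 250 km/s at 15 kiloparsecs from the galactic centre
visible_mass = 5e10 * M_SUN
radius = 15 * KPC
observed_speed = 250e3  # m/s

predicted_speed = circular_speed(visible_mass, radius)
needed_mass = observed_speed ** 2 * radius / G  # mass required to hold the star

print(f"speed predicted from visible mass: {predicted_speed / 1000:.0f} km/s")
print(f"speed actually observed:           {observed_speed / 1000:.0f} km/s")
print(f"visible mass is only {visible_mass / needed_mass:.0%} of the mass needed")

With these assumed numbers, the visible mass supplies only about a quarter of the mass needed to hold the star in orbit; the real surveys found the same kind of shortfall, at the roughly 20% level described above.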
The third is the problem of Dark Energy. In the late 1990s, astrophysicists realized it would be possible to measure the rate of expansion of the universe utilizing space telescopes and supercomputers (again utilizing Einstein’s General Relativity equations). At that time, they had three scenarios for the ultimate fate of the universe: a closed universe, an open universe, or a flat universe. A closed universe is one in which the mass of the universe is greater than the force of expansion, and the universe would collapse onto itself to create a new universe (the expansion of space, a stop, and then the contraction of space). An open universe is one in which the expansion is greater than the mass, and the universe will expand forever and eventually suffer total decay and cease to exist. A flat universe is one in which the mass and the expansion are equal, and the universe would just stop and be fixed in size (nobody expected this result, but it was possible mathematically). Everybody expected that the rate of expansion was slowing, and that we would end up in either an open or a closed universe. To their surprise, the results showed that the rate of expansion was increasing. The only way this would be possible is if there were a repulsive energy force that was greater than the gravitational force. They named this energy “Dark Energy”. The astrophysicists went to the quantum physicists to ask what this Dark Energy could be. The quantum physicists had no answer. Yet everyone agrees that Dark Energy exists, and until the “Standard Model” can incorporate Dark Energy it will be incomplete.
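The three scenarios correspond to whether the density of the universe is above, below, or exactly at a “critical density” given by the standard Friedmann equation, ρ_crit = 3H²/(8πG); this formula is not in the article but is standard cosmology. Here is a minimal Python sketch, assuming a commonly quoted Hubble constant of about 70 km/s per megaparsec:

import math

G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
H0 = 70 * 1000 / 3.086e22  # Hubble constant: ~70 km/s per megaparsec, in 1/s

# Critical density from the Friedmann equation: rho_crit = 3 * H^2 / (8 * pi * G)
rho_crit = 3 * H0 ** 2 / (8 * math.pi * G)
print(f"critical density: {rho_crit:.1e} kg/m^3")  # about 9e-27 kg/m^3

def fate(density):
    # Classify the three scenarios: denser than critical means closed,
    # less dense means open, exactly critical means flat
    if density > rho_crit:
        return "closed: expansion stops, then contraction"
    if density < rho_crit:
        return "open: expansion forever"
    return "flat: expansion slows toward a halt"

print(fate(2 * rho_crit))    # a closed universe
print(fate(0.3 * rho_crit))  # an open universe

The surprise measurement of an accelerating expansion meant that this matter-only classification was incomplete, which is what forced the introduction of Dark Energy.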
Please Note - many academics, scientists, and engineers would critique what I have written here as neither accurate nor thorough. I freely acknowledge that these critiques are correct. It was not my intention to be accurate or thorough, as I am not qualified to give an accurate or thorough description. My intention was to be understandable to a layperson, so that they can grasp the concepts. The entire education and training of academics, scientists, and engineers is based on accuracy and thoroughness, and as such, they strive for accuracy and thoroughness. I believe it is essential for all laypersons to grasp the concepts of this paper, so that they can make more informed decisions in those areas of human endeavor that deal with this subject. As such, I did not strive for accuracy and thoroughness, only understandability.
Most academics, scientists, and engineers, when speaking or writing for the general public (and many science writers as well), strive to be understandable to the general public. However, they often fall short on understandability because of their commitment to accuracy and thoroughness, as well as some audience-awareness factors. Their two biggest problems are accuracy and the audience’s knowledge of the topic.
Accuracy is a problem because academics, scientists, engineers, and science writers are loath to be inaccurate. They want the audience to obtain the correct information, and they face possible negative repercussions amongst their colleagues and the scientific community at large if they are inaccurate. However, because modern science is complex, this accuracy can, and often does, lead to confusion amongst the audience.
The audience’s knowledge of the topic is important because most modern science is complex, with its own words, terminology, and basic concepts that the audience is unfamiliar with or misinterprets. The audience becomes confused (even while smiling and lauding the academic, scientist, engineer, or science writer), and does not achieve understanding. Many times, the academic, scientist, engineer, or science writer utilizes the scientific discipline’s own words, terminology, and basic concepts without realizing that the audience misinterprets them or has no comprehension of them.
It is for this reason that I place understandability as the highest priority in my writing, and I am willing to sacrifice accuracy and thoroughness to achieve understandability. There are many books, websites, and videos available that are more accurate and thorough. The subchapter on “Further Readings” also contains books on various subjects that can provide more accurate and thorough information. I leave it to the reader to decide if they want more accurate or thorough information, and to seek out these books, websites, and videos for this information.