Entropy and Statistical Mechanics
The subject matter of this document is statistical mechanics, the study of how macroscopic behavior emerges from microscopic interactions in systems with many parts.
Our goal is to provide a unified introduction to statistical mechanics using the concept of entropy. Entropy pervades statistical mechanics and other scientific subjects, and a precise definition of it is useful in its own right. While many students may have heard the word entropy before, it is rarely explained in full detail or with rigorous mathematics, leaving students confused about many of its implications. Moreover, when students learn the laws of thermodynamics, which describe the macroscopic consequences of statistical mechanics, such as “change in internal energy = heat flow in + work done on the system”, the concepts of internal energy, heat, and work are all left at the mercy of a student’s vague, intuitive understanding.
In this document, we will see that formulating statistical mechanics with a focus on entropy provides a more unified and symmetric understanding of many of the laws of thermodynamics. Indeed, some laws of thermodynamics that appear confusing and potentially unrelated at first glance can all be seen to follow from the same treatment of entropy in statistical mechanics.
1 Entropy as information
At this point in your life, you may have heard the word entropy, but chances are, it was given a vague, non-committal definition. The goal of this section is to introduce a more explicit concept of entropy from an abstract standpoint before considering its experimental and observational signatures.
1.1 Quantifying the amount of information in the answer to a question
Entropy, while useful in physics, also has applications in computer science and information theory. This section will explore the concept of information entropy as an abstract object.
We will first consider entropy not as a concept related to heat in objects, but as a purely axiomatic quantification of what we mean by the information we receive when we hear the answer to a question. For a concrete example, imagine that someone flips a coin without revealing which side landed face up, and we ask, “What was the outcome of the coin flip: heads or tails?” When they tell us the answer, how much “information” do we gain by learning what the outcome was? In other words, we are faced with determining how much information is received when we hear the answer to a question whose possible answers follow a probability distribution. This question was asked by Claude Shannon in 1948.
After pondering this question for a long time, you might come up with a few criteria that any reasonable measure must obey, such as:
- If we ask two questions whose answers are independent of each other, then the information contained in the answers to both questions should be the sum of the information gained in learning the answer to each one individually. For example, if we have two independent coin flip experiments, the information gained in hearing the outcome of one coin flip should be the same as the information gained in hearing the outcome of the other, and the total information gained should be the sum of the individual amounts of information gained (as illustrated in the sketch below). How does this concept generalize to questions with interdependent answers? Two such questions might be “am I wearing gloves?” and “am I wearing a sweater?”. The exact statement of this criterion is more complicated, and we will not ask you to consider it here.
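To make the additivity criterion concrete, here is a minimal sketch assuming the standard logarithmic measure of information, under which learning an outcome of probability p conveys −log2(p) bits; the function name information_bits is just an illustrative label, not notation from the text. Because the probabilities of independent outcomes multiply, their logarithms add, which is exactly the criterion stated above.

```python
import math

def information_bits(p):
    """Information, in bits, gained from learning an outcome that had probability p."""
    return -math.log2(p)

# Two independent fair coin flips: each particular outcome has probability 1/2.
info_first = information_bits(0.5)   # 1.0 bit
info_second = information_bits(0.5)  # 1.0 bit

# A particular joint outcome (say, heads then tails) has probability 1/2 * 1/2 = 1/4.
info_both = information_bits(0.5 * 0.5)  # 2.0 bits

# Additivity: the information in the joint answer equals the sum of the
# information in each independent answer.
assert math.isclose(info_both, info_first + info_second)
print(info_first, info_second, info_both)  # 1.0 1.0 2.0
```

Any measure that is additive over independent questions must turn products of probabilities into sums, which is why (together with some mild regularity assumptions) a logarithm is essentially forced.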